2025-05-19 18:47:36.914249 | Job console starting
2025-05-19 18:47:36.927579 | Updating git repos
2025-05-19 18:47:37.014661 | Cloning repos into workspace
2025-05-19 18:47:37.171232 | Restoring repo states
2025-05-19 18:47:37.197213 | Merging changes
2025-05-19 18:47:37.197235 | Checking out repos
2025-05-19 18:47:37.454759 | Preparing playbooks
2025-05-19 18:47:38.148475 | Running Ansible setup
2025-05-19 18:47:43.531647 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-19 18:47:44.334989 |
2025-05-19 18:47:44.335183 | PLAY [Base pre]
2025-05-19 18:47:44.352684 |
2025-05-19 18:47:44.352838 | TASK [Setup log path fact]
2025-05-19 18:47:44.385029 | orchestrator | ok
2025-05-19 18:47:44.403448 |
2025-05-19 18:47:44.403590 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 18:47:44.447132 | orchestrator | ok
2025-05-19 18:47:44.459818 |
2025-05-19 18:47:44.459932 | TASK [emit-job-header : Print job information]
2025-05-19 18:47:44.505569 | # Job Information
2025-05-19 18:47:44.505839 | Ansible Version: 2.16.14
2025-05-19 18:47:44.505893 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-19 18:47:44.505945 | Pipeline: post
2025-05-19 18:47:44.505980 | Executor: 521e9411259a
2025-05-19 18:47:44.506012 | Triggered by: https://github.com/osism/testbed/commit/693e2117bf36bda22b7da910ba387549f8c29c9a
2025-05-19 18:47:44.506045 | Event ID: e38c10b4-34d6-11f0-8e4b-0c1161248bd8
2025-05-19 18:47:44.515241 |
2025-05-19 18:47:44.515375 | LOOP [emit-job-header : Print node information]
2025-05-19 18:47:44.658094 | orchestrator | ok:
2025-05-19 18:47:44.658589 | orchestrator | # Node Information
2025-05-19 18:47:44.658686 | orchestrator | Inventory Hostname: orchestrator
2025-05-19 18:47:44.658889 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-19 18:47:44.658995 | orchestrator | Username: zuul-testbed02
2025-05-19 18:47:44.659089 | orchestrator | Distro: Debian 12.11
2025-05-19 18:47:44.659201 | orchestrator | Provider: static-testbed
2025-05-19 18:47:44.659303 | orchestrator | Region:
2025-05-19 18:47:44.659401 | orchestrator | Label: testbed-orchestrator
2025-05-19 18:47:44.659489 | orchestrator | Product Name: OpenStack Nova
2025-05-19 18:47:44.659578 | orchestrator | Interface IP: 81.163.193.140
2025-05-19 18:47:44.685432 |
2025-05-19 18:47:44.685632 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-19 18:47:45.197729 | orchestrator -> localhost | changed
2025-05-19 18:47:45.210114 |
2025-05-19 18:47:45.210285 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-19 18:47:46.319023 | orchestrator -> localhost | changed
2025-05-19 18:47:46.342150 |
2025-05-19 18:47:46.342310 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-19 18:47:46.653303 | orchestrator -> localhost | ok
2025-05-19 18:47:46.669535 |
2025-05-19 18:47:46.669728 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-19 18:47:46.704973 | orchestrator | ok
2025-05-19 18:47:46.726928 | orchestrator | included: /var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-19 18:47:46.736079 |
2025-05-19 18:47:46.736183 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-19 18:47:47.618553 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-19 18:47:47.619257 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/38c228cfd6f947b9850ab9dad5977ef2_id_rsa
2025-05-19 18:47:47.619366 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/38c228cfd6f947b9850ab9dad5977ef2_id_rsa.pub
2025-05-19 18:47:47.619431 | orchestrator -> localhost | The key fingerprint is:
2025-05-19 18:47:47.619491 | orchestrator -> localhost | SHA256:H5v9GP56iEbHVmh6pGSrEC8zq7H7qDa/bieiLayStDs zuul-build-sshkey
2025-05-19 18:47:47.619547 | orchestrator -> localhost | The key's randomart image is:
2025-05-19 18:47:47.619623 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-19 18:47:47.619681 | orchestrator -> localhost | |                 |
2025-05-19 18:47:47.619764 | orchestrator -> localhost | |                 |
2025-05-19 18:47:47.619821 | orchestrator -> localhost | | .               |
2025-05-19 18:47:47.619871 | orchestrator -> localhost | | . o + .         |
2025-05-19 18:47:47.619922 | orchestrator -> localhost | | oSo.B .         |
2025-05-19 18:47:47.619994 | orchestrator -> localhost | | . = ..===       |
2025-05-19 18:47:47.620073 | orchestrator -> localhost | |o.. . * o+=o.    |
2025-05-19 18:47:47.620128 | orchestrator -> localhost | |+E+ o+o . o..+.  |
2025-05-19 18:47:47.620182 | orchestrator -> localhost | |===BOB. . ++o    |
2025-05-19 18:47:47.620234 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-19 18:47:47.620368 | orchestrator -> localhost | ok: Runtime: 0:00:00.344563
2025-05-19 18:47:47.642789 |
2025-05-19 18:47:47.643110 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-19 18:47:47.685750 | orchestrator | ok
2025-05-19 18:47:47.699480 | orchestrator | included: /var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-19 18:47:47.708932 |
2025-05-19 18:47:47.709035 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-19 18:47:47.732852 | orchestrator | skipping: Conditional result was False
2025-05-19 18:47:47.741190 |
2025-05-19 18:47:47.741302 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-19 18:47:48.358670 | orchestrator | changed
2025-05-19 18:47:48.368980 |
2025-05-19 18:47:48.369123 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-19 18:47:48.648165 | orchestrator | ok
2025-05-19 18:47:48.655587 |
2025-05-19 18:47:48.655702 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-19 18:47:49.076629 | orchestrator | ok
2025-05-19 18:47:49.100586 |
2025-05-19 18:47:49.100793 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-19 18:47:49.518272 | orchestrator | ok
2025-05-19 18:47:49.532460 |
2025-05-19 18:47:49.532824 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-19 18:47:49.571555 | orchestrator | skipping: Conditional result was False
2025-05-19 18:47:49.583285 |
2025-05-19 18:47:49.583435 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-19 18:47:50.158122 | orchestrator -> localhost | changed
2025-05-19 18:47:50.183763 |
2025-05-19 18:47:50.183986 | TASK [add-build-sshkey : Add back temp key]
2025-05-19 18:47:50.597592 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/38c228cfd6f947b9850ab9dad5977ef2_id_rsa (zuul-build-sshkey)
2025-05-19 18:47:50.598033 | orchestrator -> localhost | ok: Runtime: 0:00:00.017782
2025-05-19 18:47:50.607921 |
2025-05-19 18:47:50.608047 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-19 18:47:51.155103 | orchestrator | ok
2025-05-19 18:47:51.169408 |
2025-05-19 18:47:51.169602 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-19 18:47:51.215480 | orchestrator | skipping: Conditional result was False
2025-05-19 18:47:51.283298 |
2025-05-19 18:47:51.283440 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-19 18:47:51.712284 | orchestrator | ok
2025-05-19 18:47:51.723966 |
2025-05-19 18:47:51.724090 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-19 18:47:51.771909 | orchestrator | ok
2025-05-19 18:47:51.787060 |
2025-05-19 18:47:51.787235 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-19 18:47:52.147694 | orchestrator -> localhost | ok
2025-05-19 18:47:52.160096 |
2025-05-19 18:47:52.160235 | TASK [validate-host : Collect information about the host]
2025-05-19 18:47:53.396773 | orchestrator | ok
2025-05-19 18:47:53.418332 |
2025-05-19 18:47:53.418589 | TASK [validate-host : Sanitize hostname]
2025-05-19 18:47:53.497941 | orchestrator | ok
2025-05-19 18:47:53.507090 |
2025-05-19 18:47:53.507237 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-19 18:47:54.072046 | orchestrator -> localhost | changed
2025-05-19 18:47:54.080058 |
2025-05-19 18:47:54.080222 | TASK [validate-host : Collect information about zuul worker]
2025-05-19 18:47:54.538097 | orchestrator | ok
2025-05-19 18:47:54.544292 |
2025-05-19 18:47:54.544472 | TASK [validate-host : Write out all zuul information for each host]
2025-05-19 18:47:55.198580 | orchestrator -> localhost | changed
2025-05-19 18:47:55.215310 |
2025-05-19 18:47:55.215590 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-19 18:47:55.540510 | orchestrator | ok
2025-05-19 18:47:55.550349 |
2025-05-19 18:47:55.550486 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-19 18:48:31.357066 | orchestrator | changed:
2025-05-19 18:48:31.357380 | orchestrator | .d..t...... src/
2025-05-19 18:48:31.357424 | orchestrator | .d..t...... src/github.com/
2025-05-19 18:48:31.357455 | orchestrator | .d..t...... src/github.com/osism/
2025-05-19 18:48:31.357480 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-19 18:48:31.357505 | orchestrator | RedHat.yml
2025-05-19 18:48:31.369838 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-19 18:48:31.369857 | orchestrator | RedHat.yml
2025-05-19 18:48:31.369916 | orchestrator | = 2.2.0"...
2025-05-19 18:48:43.806523 | orchestrator | 18:48:43.806 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-19 18:48:43.888301 | orchestrator | 18:48:43.888 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-05-19 18:48:45.254654 | orchestrator | 18:48:45.254 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-19 18:48:46.342501 | orchestrator | 18:48:46.342 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 18:48:47.273595 | orchestrator | 18:48:47.273 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-19 18:48:48.175051 | orchestrator | 18:48:48.174 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 18:48:49.335321 | orchestrator | 18:48:49.335 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-19 18:48:50.361916 | orchestrator | 18:48:50.361 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-19 18:48:50.362007 | orchestrator | 18:48:50.361 STDOUT terraform: Providers are signed by their developers.
2025-05-19 18:48:50.362154 | orchestrator | 18:48:50.361 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-19 18:48:50.362167 | orchestrator | 18:48:50.361 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-19 18:48:50.362176 | orchestrator | 18:48:50.361 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-19 18:48:50.362196 | orchestrator | 18:48:50.362 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-19 18:48:50.362261 | orchestrator | 18:48:50.362 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-19 18:48:50.362276 | orchestrator | 18:48:50.362 STDOUT terraform: you run "tofu init" in the future.
2025-05-19 18:48:50.362351 | orchestrator | 18:48:50.362 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-19 18:48:50.362426 | orchestrator | 18:48:50.362 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-19 18:48:50.362522 | orchestrator | 18:48:50.362 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-19 18:48:50.362537 | orchestrator | 18:48:50.362 STDOUT terraform: should now work.
2025-05-19 18:48:50.362607 | orchestrator | 18:48:50.362 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-19 18:48:50.362697 | orchestrator | 18:48:50.362 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-19 18:48:50.362786 | orchestrator | 18:48:50.362 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-19 18:48:50.543954 | orchestrator | 18:48:50.543 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-19 18:48:50.771219 | orchestrator | 18:48:50.768 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-19 18:48:50.771277 | orchestrator | 18:48:50.768 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-19 18:48:50.771284 | orchestrator | 18:48:50.768 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-19 18:48:50.771289 | orchestrator | 18:48:50.768 STDOUT terraform: for this configuration.
2025-05-19 18:48:50.994075 | orchestrator | 18:48:50.991 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
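The init output above shows OpenTofu resolving hashicorp/local, hashicorp/null, and terraform-provider-openstack/openstack (the latter against ">= 1.53.0"). A configuration producing this kind of resolution would declare its providers roughly as in the following sketch; only the openstack bound is visible in the log, every other constraint here is an assumption.

    terraform {
      required_providers {
        # Bound taken from the init output above.
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = ">= 1.53.0"
        }
        # Constraints below are assumptions; the log only shows which
        # versions ended up installed (local v2.5.3, null v3.2.4).
        local = {
          source = "hashicorp/local"
        }
        null = {
          source = "hashicorp/null"
        }
      }
    }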
2025-05-19 18:48:51.085579 | orchestrator | 18:48:51.085 STDOUT terraform: ci.auto.tfvars
2025-05-19 18:48:51.742652 | orchestrator | 18:48:51.742 STDOUT terraform: default_custom.tf
2025-05-19 18:48:53.128044 | orchestrator | 18:48:53.127 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-05-19 18:48:54.902382 | orchestrator | 18:48:54.902 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-19 18:48:55.434413 | orchestrator | 18:48:55.434 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-19 18:48:55.653348 | orchestrator | 18:48:55.653 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-19 18:48:55.653462 | orchestrator | 18:48:55.653 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-19 18:48:55.653504 | orchestrator | 18:48:55.653 STDOUT terraform:  + create
2025-05-19 18:48:55.653523 | orchestrator | 18:48:55.653 STDOUT terraform:  <= read (data resources)
2025-05-19 18:48:55.653589 | orchestrator | 18:48:55.653 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-19 18:48:55.653868 | orchestrator | 18:48:55.653 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-19 18:48:55.653890 | orchestrator | 18:48:55.653 STDOUT terraform:  # (config refers to values not yet known)
2025-05-19 18:48:55.653966 | orchestrator | 18:48:55.653 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-19 18:48:55.653985 | orchestrator | 18:48:55.653 STDOUT terraform:  + checksum = (known after apply)
2025-05-19 18:48:55.654097 | orchestrator | 18:48:55.653 STDOUT terraform:  + created_at = (known after apply)
2025-05-19 18:48:55.654118 | orchestrator | 18:48:55.654 STDOUT terraform:  + file = (known after apply)
2025-05-19 18:48:55.654182 | orchestrator | 18:48:55.654 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.654227 | orchestrator | 18:48:55.654 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.654274 | orchestrator | 18:48:55.654 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-19 18:48:55.654323 | orchestrator | 18:48:55.654 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-19 18:48:55.654358 | orchestrator | 18:48:55.654 STDOUT terraform:  + most_recent = true
2025-05-19 18:48:55.654396 | orchestrator | 18:48:55.654 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.654454 | orchestrator | 18:48:55.654 STDOUT terraform:  + protected = (known after apply)
2025-05-19 18:48:55.654551 | orchestrator | 18:48:55.654 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.654604 | orchestrator | 18:48:55.654 STDOUT terraform:  + schema = (known after apply)
2025-05-19 18:48:55.654680 | orchestrator | 18:48:55.654 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-19 18:48:55.654737 | orchestrator | 18:48:55.654 STDOUT terraform:  + tags = (known after apply)
2025-05-19 18:48:55.654787 | orchestrator | 18:48:55.654 STDOUT terraform:  + updated_at = (known after apply)
2025-05-19 18:48:55.654804 | orchestrator | 18:48:55.654 STDOUT terraform:  }
2025-05-19 18:48:55.655064 | orchestrator | 18:48:55.654 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-19 18:48:55.655108 | orchestrator | 18:48:55.655 STDOUT terraform:  # (config refers to values not yet known)
2025-05-19 18:48:55.655171 | orchestrator | 18:48:55.655 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-19 18:48:55.655216 | orchestrator | 18:48:55.655 STDOUT terraform:  + checksum = (known after apply)
2025-05-19 18:48:55.655265 | orchestrator | 18:48:55.655 STDOUT terraform:  + created_at = (known after apply)
2025-05-19 18:48:55.655314 | orchestrator | 18:48:55.655 STDOUT terraform:  + file = (known after apply)
2025-05-19 18:48:55.655361 | orchestrator | 18:48:55.655 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.655407 | orchestrator | 18:48:55.655 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.655454 | orchestrator | 18:48:55.655 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-19 18:48:55.655552 | orchestrator | 18:48:55.655 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-19 18:48:55.655584 | orchestrator | 18:48:55.655 STDOUT terraform:  + most_recent = true
2025-05-19 18:48:55.655633 | orchestrator | 18:48:55.655 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.655682 | orchestrator | 18:48:55.655 STDOUT terraform:  + protected = (known after apply)
2025-05-19 18:48:55.655731 | orchestrator | 18:48:55.655 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.655779 | orchestrator | 18:48:55.655 STDOUT terraform:  + schema = (known after apply)
2025-05-19 18:48:55.655835 | orchestrator | 18:48:55.655 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-19 18:48:55.655876 | orchestrator | 18:48:55.655 STDOUT terraform:  + tags = (known after apply)
2025-05-19 18:48:55.655922 | orchestrator | 18:48:55.655 STDOUT terraform:  + updated_at = (known after apply)
2025-05-19 18:48:55.655938 | orchestrator | 18:48:55.655 STDOUT terraform:  }
2025-05-19 18:48:55.655998 | orchestrator | 18:48:55.655 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-05-19 18:48:55.656045 | orchestrator | 18:48:55.655 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-05-19 18:48:55.656103 | orchestrator | 18:48:55.656 STDOUT terraform:  + content = (known after apply)
2025-05-19 18:48:55.656160 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 18:48:55.656219 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 18:48:55.656277 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 18:48:55.656333 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 18:48:55.656392 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 18:48:55.656447 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 18:48:55.656499 | orchestrator | 18:48:55.656 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 18:48:55.656537 | orchestrator | 18:48:55.656 STDOUT terraform:  + file_permission = "0644"
2025-05-19 18:48:55.656596 | orchestrator | 18:48:55.656 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-05-19 18:48:55.656655 | orchestrator | 18:48:55.656 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.656672 | orchestrator | 18:48:55.656 STDOUT terraform:  }
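The two deferred image lookups planned above (data.openstack_images_image_v2.image and image_node) correspond to data blocks roughly like the following sketch. Only most_recent = true is visible in the plan output; the name argument and the var.image variable are assumptions used for illustration.

    data "openstack_images_image_v2" "image" {
      # The plan marks the name as (known after apply), so it is
      # presumably derived from a variable or another resource.
      name        = var.image   # hypothetical variable
      most_recent = true
    }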
2025-05-19 18:48:55.656717 | orchestrator | 18:48:55.656 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-05-19 18:48:55.656757 | orchestrator | 18:48:55.656 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-05-19 18:48:55.656831 | orchestrator | 18:48:55.656 STDOUT terraform:  + content = (known after apply)
2025-05-19 18:48:55.656891 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 18:48:55.656947 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 18:48:55.657004 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 18:48:55.657061 | orchestrator | 18:48:55.656 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 18:48:55.657117 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 18:48:55.657175 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 18:48:55.657214 | orchestrator | 18:48:55.657 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 18:48:55.657262 | orchestrator | 18:48:55.657 STDOUT terraform:  + file_permission = "0644"
2025-05-19 18:48:55.657333 | orchestrator | 18:48:55.657 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-05-19 18:48:55.657394 | orchestrator | 18:48:55.657 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.657411 | orchestrator | 18:48:55.657 STDOUT terraform:  }
2025-05-19 18:48:55.657462 | orchestrator | 18:48:55.657 STDOUT terraform:  # local_file.inventory will be created
2025-05-19 18:48:55.657515 | orchestrator | 18:48:55.657 STDOUT terraform:  + resource "local_file" "inventory" {
2025-05-19 18:48:55.657574 | orchestrator | 18:48:55.657 STDOUT terraform:  + content = (known after apply)
2025-05-19 18:48:55.657632 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 18:48:55.657689 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 18:48:55.657746 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 18:48:55.657816 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 18:48:55.657868 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 18:48:55.657923 | orchestrator | 18:48:55.657 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 18:48:55.657962 | orchestrator | 18:48:55.657 STDOUT terraform:  + directory_permission = "0777"
2025-05-19 18:48:55.658004 | orchestrator | 18:48:55.657 STDOUT terraform:  + file_permission = "0644"
2025-05-19 18:48:55.658075 | orchestrator | 18:48:55.657 STDOUT terraform:  + filename = "inventory.ci"
2025-05-19 18:48:55.658135 | orchestrator | 18:48:55.658 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.658152 | orchestrator | 18:48:55.658 STDOUT terraform:  }
2025-05-19 18:48:55.658197 | orchestrator | 18:48:55.658 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-05-19 18:48:55.658245 | orchestrator | 18:48:55.658 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-05-19 18:48:55.658297 | orchestrator | 18:48:55.658 STDOUT terraform:  + content = (sensitive value)
2025-05-19 18:48:55.658353 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-19 18:48:55.658412 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-19 18:48:55.658489 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-19 18:48:55.658562 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-19 18:48:55.658617 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-19 18:48:55.658671 | orchestrator | 18:48:55.658 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-19 18:48:55.658704 | orchestrator | 18:48:55.658 STDOUT terraform:  + directory_permission = "0700"
2025-05-19 18:48:55.658745 | orchestrator | 18:48:55.658 STDOUT terraform:  + file_permission = "0600"
2025-05-19 18:48:55.658783 | orchestrator | 18:48:55.658 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-05-19 18:48:55.658837 | orchestrator | 18:48:55.658 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.658854 | orchestrator | 18:48:55.658 STDOUT terraform:  }
2025-05-19 18:48:55.658891 | orchestrator | 18:48:55.658 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-05-19 18:48:55.658935 | orchestrator | 18:48:55.658 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-05-19 18:48:55.658965 | orchestrator | 18:48:55.658 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.658980 | orchestrator | 18:48:55.658 STDOUT terraform:  }
2025-05-19 18:48:55.659052 | orchestrator | 18:48:55.658 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-19 18:48:55.659119 | orchestrator | 18:48:55.659 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-19 18:48:55.659162 | orchestrator | 18:48:55.659 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.659188 | orchestrator | 18:48:55.659 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.659234 | orchestrator | 18:48:55.659 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.659276 | orchestrator | 18:48:55.659 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.659321 | orchestrator | 18:48:55.659 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.659374 | orchestrator | 18:48:55.659 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-05-19 18:48:55.659424 | orchestrator | 18:48:55.659 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.659440 | orchestrator | 18:48:55.659 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.659505 | orchestrator | 18:48:55.659 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.659520 | orchestrator | 18:48:55.659 STDOUT terraform:  }
2025-05-19 18:48:55.659568 | orchestrator | 18:48:55.659 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-19 18:48:55.659633 | orchestrator | 18:48:55.659 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.659681 | orchestrator | 18:48:55.659 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.659707 | orchestrator | 18:48:55.659 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.659747 | orchestrator | 18:48:55.659 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.659811 | orchestrator | 18:48:55.659 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.659863 | orchestrator | 18:48:55.659 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.659919 | orchestrator | 18:48:55.659 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-05-19 18:48:55.659959 | orchestrator | 18:48:55.659 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.659975 | orchestrator | 18:48:55.659 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.660010 | orchestrator | 18:48:55.659 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.660051 | orchestrator | 18:48:55.660 STDOUT terraform:  }
2025-05-19 18:48:55.660099 | orchestrator | 18:48:55.660 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-19 18:48:55.660158 | orchestrator | 18:48:55.660 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.660206 | orchestrator | 18:48:55.660 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.660222 | orchestrator | 18:48:55.660 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.660285 | orchestrator | 18:48:55.660 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.660320 | orchestrator | 18:48:55.660 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.660364 | orchestrator | 18:48:55.660 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.660420 | orchestrator | 18:48:55.660 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-05-19 18:48:55.660462 | orchestrator | 18:48:55.660 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.660550 | orchestrator | 18:48:55.660 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.660564 | orchestrator | 18:48:55.660 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.660576 | orchestrator | 18:48:55.660 STDOUT terraform:  }
2025-05-19 18:48:55.660605 | orchestrator | 18:48:55.660 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-19 18:48:55.660669 | orchestrator | 18:48:55.660 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.660719 | orchestrator | 18:48:55.660 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.660749 | orchestrator | 18:48:55.660 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.660794 | orchestrator | 18:48:55.660 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.660837 | orchestrator | 18:48:55.660 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.660880 | orchestrator | 18:48:55.660 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.660938 | orchestrator | 18:48:55.660 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-05-19 18:48:55.660983 | orchestrator | 18:48:55.660 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.661012 | orchestrator | 18:48:55.660 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.661041 | orchestrator | 18:48:55.661 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.661055 | orchestrator | 18:48:55.661 STDOUT terraform:  }
2025-05-19 18:48:55.661120 | orchestrator | 18:48:55.661 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-19 18:48:55.661184 | orchestrator | 18:48:55.661 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.661230 | orchestrator | 18:48:55.661 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.661259 | orchestrator | 18:48:55.661 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.661305 | orchestrator | 18:48:55.661 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.661348 | orchestrator | 18:48:55.661 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.661393 | orchestrator | 18:48:55.661 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.661448 | orchestrator | 18:48:55.661 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-05-19 18:48:55.661513 | orchestrator | 18:48:55.661 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.661530 | orchestrator | 18:48:55.661 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.661566 | orchestrator | 18:48:55.661 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.661581 | orchestrator | 18:48:55.661 STDOUT terraform:  }
2025-05-19 18:48:55.661661 | orchestrator | 18:48:55.661 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-19 18:48:55.661723 | orchestrator | 18:48:55.661 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.661765 | orchestrator | 18:48:55.661 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.661795 | orchestrator | 18:48:55.661 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.661841 | orchestrator | 18:48:55.661 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.661884 | orchestrator | 18:48:55.661 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.661932 | orchestrator | 18:48:55.661 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.661984 | orchestrator | 18:48:55.661 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-05-19 18:48:55.662053 | orchestrator | 18:48:55.661 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.662071 | orchestrator | 18:48:55.662 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.662100 | orchestrator | 18:48:55.662 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.662114 | orchestrator | 18:48:55.662 STDOUT terraform:  }
2025-05-19 18:48:55.662182 | orchestrator | 18:48:55.662 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-19 18:48:55.662247 | orchestrator | 18:48:55.662 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 18:48:55.662290 | orchestrator | 18:48:55.662 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.662325 | orchestrator | 18:48:55.662 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.662363 | orchestrator | 18:48:55.662 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.662408 | orchestrator | 18:48:55.662 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.662451 | orchestrator | 18:48:55.662 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.662536 | orchestrator | 18:48:55.662 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-05-19 18:48:55.662575 | orchestrator | 18:48:55.662 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.662590 | orchestrator | 18:48:55.662 STDOUT terraform:  + size = 80
2025-05-19 18:48:55.662622 | orchestrator | 18:48:55.662 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.662636 | orchestrator | 18:48:55.662 STDOUT terraform:  }
2025-05-19 18:48:55.662695 | orchestrator | 18:48:55.662 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-19 18:48:55.662750 | orchestrator | 18:48:55.662 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.662789 | orchestrator | 18:48:55.662 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.662804 | orchestrator | 18:48:55.662 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.662853 | orchestrator | 18:48:55.662 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.662892 | orchestrator | 18:48:55.662 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.662941 | orchestrator | 18:48:55.662 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-05-19 18:48:55.662979 | orchestrator | 18:48:55.662 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.663002 | orchestrator | 18:48:55.662 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.663031 | orchestrator | 18:48:55.662 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.663045 | orchestrator | 18:48:55.663 STDOUT terraform:  }
2025-05-19 18:48:55.663115 | orchestrator | 18:48:55.663 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-19 18:48:55.663170 | orchestrator | 18:48:55.663 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.663209 | orchestrator | 18:48:55.663 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.663223 | orchestrator | 18:48:55.663 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.663271 | orchestrator | 18:48:55.663 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.663312 | orchestrator | 18:48:55.663 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.663359 | orchestrator | 18:48:55.663 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-05-19 18:48:55.663399 | orchestrator | 18:48:55.663 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.663427 | orchestrator | 18:48:55.663 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.663447 | orchestrator | 18:48:55.663 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.663460 | orchestrator | 18:48:55.663 STDOUT terraform:  }
2025-05-19 18:48:55.663523 | orchestrator | 18:48:55.663 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-19 18:48:55.663578 | orchestrator | 18:48:55.663 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.663617 | orchestrator | 18:48:55.663 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.663644 | orchestrator | 18:48:55.663 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.663686 | orchestrator | 18:48:55.663 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.663721 | orchestrator | 18:48:55.663 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.663771 | orchestrator | 18:48:55.663 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-05-19 18:48:55.663810 | orchestrator | 18:48:55.663 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.663836 | orchestrator | 18:48:55.663 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.663864 | orchestrator | 18:48:55.663 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.663876 | orchestrator | 18:48:55.663 STDOUT terraform:  }
2025-05-19 18:48:55.663934 | orchestrator | 18:48:55.663 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-19 18:48:55.663989 | orchestrator | 18:48:55.663 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.664027 | orchestrator | 18:48:55.663 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.664054 | orchestrator | 18:48:55.664 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.664092 | orchestrator | 18:48:55.664 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.664132 | orchestrator | 18:48:55.664 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.664180 | orchestrator | 18:48:55.664 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-05-19 18:48:55.664220 | orchestrator | 18:48:55.664 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.664246 | orchestrator | 18:48:55.664 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.664273 | orchestrator | 18:48:55.664 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.664285 | orchestrator | 18:48:55.664 STDOUT terraform:  }
2025-05-19 18:48:55.664342 | orchestrator | 18:48:55.664 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-19 18:48:55.664396 | orchestrator | 18:48:55.664 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.664435 | orchestrator | 18:48:55.664 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.664461 | orchestrator | 18:48:55.664 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.664512 | orchestrator | 18:48:55.664 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.664551 | orchestrator | 18:48:55.664 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.664599 | orchestrator | 18:48:55.664 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-05-19 18:48:55.664637 | orchestrator | 18:48:55.664 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.664669 | orchestrator | 18:48:55.664 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.664691 | orchestrator | 18:48:55.664 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.664702 | orchestrator | 18:48:55.664 STDOUT terraform:  }
2025-05-19 18:48:55.664762 | orchestrator | 18:48:55.664 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-19 18:48:55.664817 | orchestrator | 18:48:55.664 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.664856 | orchestrator | 18:48:55.664 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.664882 | orchestrator | 18:48:55.664 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.664921 | orchestrator | 18:48:55.664 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.664961 | orchestrator | 18:48:55.664 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.665008 | orchestrator | 18:48:55.664 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-05-19 18:48:55.665046 | orchestrator | 18:48:55.665 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.665073 | orchestrator | 18:48:55.665 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.665100 | orchestrator | 18:48:55.665 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.665111 | orchestrator | 18:48:55.665 STDOUT terraform:  }
2025-05-19 18:48:55.665175 | orchestrator | 18:48:55.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-19 18:48:55.665230 | orchestrator | 18:48:55.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.665268 | orchestrator | 18:48:55.665 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.665293 | orchestrator | 18:48:55.665 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.665333 | orchestrator | 18:48:55.665 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.665372 | orchestrator | 18:48:55.665 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.665420 | orchestrator | 18:48:55.665 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-05-19 18:48:55.665459 | orchestrator | 18:48:55.665 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.665496 | orchestrator | 18:48:55.665 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.665523 | orchestrator | 18:48:55.665 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.665534 | orchestrator | 18:48:55.665 STDOUT terraform:  }
2025-05-19 18:48:55.665591 | orchestrator | 18:48:55.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-19 18:48:55.665644 | orchestrator | 18:48:55.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.665682 | orchestrator | 18:48:55.665 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.665709 | orchestrator | 18:48:55.665 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.665749 | orchestrator | 18:48:55.665 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.665787 | orchestrator | 18:48:55.665 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.665837 | orchestrator | 18:48:55.665 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-05-19 18:48:55.665877 | orchestrator | 18:48:55.665 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.665902 | orchestrator | 18:48:55.665 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.665932 | orchestrator | 18:48:55.665 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.665943 | orchestrator | 18:48:55.665 STDOUT terraform:  }
2025-05-19 18:48:55.666000 | orchestrator | 18:48:55.665 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-19 18:48:55.666077 | orchestrator | 18:48:55.665 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 18:48:55.666109 | orchestrator | 18:48:55.666 STDOUT terraform:  + attachment = (known after apply)
2025-05-19 18:48:55.666135 | orchestrator | 18:48:55.666 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.666176 | orchestrator | 18:48:55.666 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.666215 | orchestrator | 18:48:55.666 STDOUT terraform:  + metadata = (known after apply)
2025-05-19 18:48:55.666262 | orchestrator | 18:48:55.666 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-05-19 18:48:55.666301 | orchestrator | 18:48:55.666 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.666327 | orchestrator | 18:48:55.666 STDOUT terraform:  + size = 20
2025-05-19 18:48:55.666353 | orchestrator | 18:48:55.666 STDOUT terraform:  + volume_type = "ssd"
2025-05-19 18:48:55.666371 | orchestrator | 18:48:55.666 STDOUT terraform:  }
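The volume entries planned above follow a fixed pattern (availability zone "nova", volume type "ssd", 80 GB base volumes and 20 GB extra node volumes), which suggests resources generated with count, roughly as in the following sketch. The count expressions, the image reference, and the naming formula are assumptions inferred from the planned names.

    resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      count             = 6   # node_base_volume[0..5] in the plan
      name              = "testbed-volume-${count.index}-node-base"
      availability_zone = "nova"
      volume_type       = "ssd"
      size              = 80
      # Assumed to boot from the node image looked up earlier.
      image_id          = data.openstack_images_image_v2.image_node.id
    }

    resource "openstack_blockstorage_volume_v3" "node_volume" {
      count             = 9   # node_volume[0..8] in the plan
      # The planned names pair the volume index with a node number
      # (e.g. testbed-volume-0-node-3); the exact expression is an assumption.
      name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
      availability_zone = "nova"
      volume_type       = "ssd"
      size              = 20
    }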
2025-05-19 18:48:55.666420 | orchestrator | 18:48:55.666 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created
2025-05-19 18:48:55.666515 | orchestrator | 18:48:55.666 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" {
2025-05-19 18:48:55.666545 | orchestrator | 18:48:55.666 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 18:48:55.666594 | orchestrator | 18:48:55.666 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 18:48:55.666638 | orchestrator | 18:48:55.666 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 18:48:55.666679 | orchestrator | 18:48:55.666 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 18:48:55.666709 | orchestrator | 18:48:55.666 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.666736 | orchestrator | 18:48:55.666 STDOUT terraform:  + config_drive = true
2025-05-19 18:48:55.666780 | orchestrator | 18:48:55.666 STDOUT terraform:  + created = (known after apply)
2025-05-19 18:48:55.666825 | orchestrator | 18:48:55.666 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 18:48:55.666865 | orchestrator | 18:48:55.666 STDOUT terraform:  + flavor_name = "OSISM-4V-16"
2025-05-19 18:48:55.666896 | orchestrator | 18:48:55.666 STDOUT terraform:  + force_delete = false
2025-05-19 18:48:55.666939 | orchestrator | 18:48:55.666 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.666986 | orchestrator | 18:48:55.666 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.667030 | orchestrator | 18:48:55.666 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 18:48:55.667061 | orchestrator | 18:48:55.667 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 18:48:55.667098 | orchestrator | 18:48:55.667 STDOUT terraform:  + name = "testbed-manager"
2025-05-19 18:48:55.667128 | orchestrator | 18:48:55.667 STDOUT terraform:  + power_state = "active"
2025-05-19 18:48:55.667171 | orchestrator | 18:48:55.667 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.667213 | orchestrator | 18:48:55.667 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 18:48:55.667241 | orchestrator | 18:48:55.667 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 18:48:55.667285 | orchestrator | 18:48:55.667 STDOUT terraform:  + updated = (known after apply)
2025-05-19 18:48:55.667328 | orchestrator | 18:48:55.667 STDOUT terraform:  + user_data = (known after apply)
2025-05-19 18:48:55.667339 | orchestrator | 18:48:55.667 STDOUT terraform:  + block_device {
2025-05-19 18:48:55.667373 | orchestrator | 18:48:55.667 STDOUT terraform:  + boot_index = 0
2025-05-19 18:48:55.667409 | orchestrator | 18:48:55.667 STDOUT terraform:  + delete_on_termination = false
2025-05-19 18:48:55.667445 | orchestrator | 18:48:55.667 STDOUT terraform:  + destination_type = "volume"
2025-05-19 18:48:55.667531 | orchestrator | 18:48:55.667 STDOUT terraform:  + multiattach = false
2025-05-19 18:48:55.667541 | orchestrator | 18:48:55.667 STDOUT terraform:  + source_type = "volume"
2025-05-19 18:48:55.667562 | orchestrator | 18:48:55.667 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.667571 | orchestrator | 18:48:55.667 STDOUT terraform:  }
2025-05-19 18:48:55.667585 | orchestrator | 18:48:55.667 STDOUT terraform:  + network {
2025-05-19 18:48:55.667609 | orchestrator | 18:48:55.667 STDOUT terraform:  + access_network = false
2025-05-19 18:48:55.667646 | orchestrator | 18:48:55.667 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 18:48:55.667683 | orchestrator | 18:48:55.667 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 18:48:55.667777 | orchestrator | 18:48:55.667 STDOUT terraform:  + mac = (known after apply)
2025-05-19 18:48:55.667812 | orchestrator | 18:48:55.667 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.667850 | orchestrator | 18:48:55.667 STDOUT terraform:  + port = (known after apply)
2025-05-19 18:48:55.667888 | orchestrator | 18:48:55.667 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.667898 | orchestrator | 18:48:55.667 STDOUT terraform:  }
2025-05-19 18:48:55.667919 | orchestrator | 18:48:55.667 STDOUT terraform:  }
2025-05-19 18:48:55.667995 | orchestrator | 18:48:55.667 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-05-19 18:48:55.668045 | orchestrator | 18:48:55.667 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 18:48:55.668088 | orchestrator | 18:48:55.668 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 18:48:55.668130 | orchestrator | 18:48:55.668 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 18:48:55.668171 | orchestrator | 18:48:55.668 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 18:48:55.668214 | orchestrator | 18:48:55.668 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 18:48:55.668244 | orchestrator | 18:48:55.668 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.668271 | orchestrator | 18:48:55.668 STDOUT terraform:  + config_drive = true
2025-05-19 18:48:55.668313 | orchestrator | 18:48:55.668 STDOUT terraform:  + created = (known after apply)
2025-05-19 18:48:55.668357 | orchestrator | 18:48:55.668 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 18:48:55.668393 | orchestrator | 18:48:55.668 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-19 18:48:55.668421 | orchestrator | 18:48:55.668 STDOUT terraform:  + force_delete = false
2025-05-19 18:48:55.668463 | orchestrator | 18:48:55.668 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.668520 | orchestrator | 18:48:55.668 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.668562 | orchestrator | 18:48:55.668 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 18:48:55.668593 | orchestrator | 18:48:55.668 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 18:48:55.668630 | orchestrator | 18:48:55.668 STDOUT terraform:  + name = "testbed-node-0"
2025-05-19 18:48:55.668659 | orchestrator | 18:48:55.668 STDOUT terraform:  + power_state = "active"
2025-05-19 18:48:55.668702 | orchestrator | 18:48:55.668 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.668744 | orchestrator | 18:48:55.668 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 18:48:55.668771 | orchestrator | 18:48:55.668 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 18:48:55.668814 | orchestrator | 18:48:55.668 STDOUT terraform:  + updated = (known after apply)
2025-05-19 18:48:55.668876 | orchestrator | 18:48:55.668 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-19 18:48:55.668887 | orchestrator | 18:48:55.668 STDOUT terraform:  + block_device {
2025-05-19 18:48:55.668922 | orchestrator | 18:48:55.668 STDOUT terraform:  + boot_index = 0
2025-05-19 18:48:55.668957 | orchestrator | 18:48:55.668 STDOUT terraform:  + delete_on_termination = false
2025-05-19 18:48:55.669002 | orchestrator | 18:48:55.668 STDOUT terraform:  + destination_type = "volume"
2025-05-19 18:48:55.669027 | orchestrator | 18:48:55.668 STDOUT terraform:  + multiattach = false
2025-05-19 18:48:55.669078 | orchestrator | 18:48:55.669 STDOUT terraform:  + source_type = "volume"
2025-05-19 18:48:55.669124 | orchestrator | 18:48:55.669 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.669135 | orchestrator | 18:48:55.669 STDOUT terraform:  }
2025-05-19 18:48:55.669145 | orchestrator | 18:48:55.669 STDOUT terraform:  + network {
2025-05-19 18:48:55.669174 | orchestrator | 18:48:55.669 STDOUT terraform:  + access_network = false
2025-05-19 18:48:55.669211 | orchestrator | 18:48:55.669 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 18:48:55.669248 | orchestrator | 18:48:55.669 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 18:48:55.669286 | orchestrator | 18:48:55.669 STDOUT terraform:  + mac = (known after apply)
2025-05-19 18:48:55.669329 | orchestrator | 18:48:55.669 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.669364 | orchestrator | 18:48:55.669 STDOUT terraform:  + port = (known after apply)
2025-05-19 18:48:55.669403 | orchestrator | 18:48:55.669 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.669413 | orchestrator | 18:48:55.669 STDOUT terraform:  }
2025-05-19 18:48:55.669433 | orchestrator | 18:48:55.669 STDOUT terraform:  }
2025-05-19 18:48:55.669511 | orchestrator | 18:48:55.669 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-05-19 18:48:55.669563 | orchestrator | 18:48:55.669 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 18:48:55.669606 | orchestrator | 18:48:55.669 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 18:48:55.669648 | orchestrator | 18:48:55.669 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 18:48:55.669691 | orchestrator | 18:48:55.669 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 18:48:55.669732 | orchestrator | 18:48:55.669 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 18:48:55.669789 | orchestrator | 18:48:55.669 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.669806 | orchestrator | 18:48:55.669 STDOUT terraform:  + config_drive = true
2025-05-19 18:48:55.669849 | orchestrator | 18:48:55.669 STDOUT terraform:  + created = (known after apply)
2025-05-19 18:48:55.669903 | orchestrator | 18:48:55.669 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 18:48:55.669935 | orchestrator | 18:48:55.669 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-19 18:48:55.669958 | orchestrator | 18:48:55.669 STDOUT terraform:  + force_delete = false
2025-05-19 18:48:55.670003 | orchestrator | 18:48:55.669 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.670060 | orchestrator | 18:48:55.669 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.670100 | orchestrator | 18:48:55.670 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 18:48:55.670132 | orchestrator | 18:48:55.670 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 18:48:55.670156 | orchestrator | 18:48:55.670 STDOUT terraform:  + name = "testbed-node-1"
2025-05-19 18:48:55.670183 | orchestrator | 18:48:55.670 STDOUT terraform:  + power_state = "active"
2025-05-19 18:48:55.670222 | orchestrator | 18:48:55.670 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.670260 | orchestrator | 18:48:55.670 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 18:48:55.670285 | orchestrator | 18:48:55.670 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 18:48:55.670326 | orchestrator | 18:48:55.670 STDOUT terraform:  + updated = (known after apply)
2025-05-19 18:48:55.670379 | orchestrator | 18:48:55.670 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-19 18:48:55.670390 | orchestrator | 18:48:55.670 STDOUT terraform:  + block_device {
2025-05-19 18:48:55.670418 | orchestrator | 18:48:55.670 STDOUT terraform:  + boot_index = 0
2025-05-19 18:48:55.670453 | orchestrator | 18:48:55.670 STDOUT terraform:  + delete_on_termination = false
2025-05-19 18:48:55.670496 | orchestrator | 18:48:55.670 STDOUT terraform:  + destination_type = "volume"
2025-05-19 18:48:55.670523 | orchestrator | 18:48:55.670 STDOUT terraform:  + multiattach = false
2025-05-19 18:48:55.670555 | orchestrator | 18:48:55.670 STDOUT terraform:  + source_type = "volume"
2025-05-19 18:48:55.670597 | orchestrator | 18:48:55.670 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.670607 | orchestrator | 18:48:55.670 STDOUT terraform:  }
2025-05-19 18:48:55.670617 | orchestrator | 18:48:55.670 STDOUT terraform:  + network {
2025-05-19 18:48:55.670642 | orchestrator | 18:48:55.670 STDOUT terraform:  + access_network = false
2025-05-19 18:48:55.670676 | orchestrator | 18:48:55.670 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 18:48:55.670710 | orchestrator | 18:48:55.670 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 18:48:55.670744 | orchestrator | 18:48:55.670 STDOUT terraform:  + mac = (known after apply)
2025-05-19 18:48:55.670779 | orchestrator | 18:48:55.670 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.670820 | orchestrator | 18:48:55.670 STDOUT terraform:  + port = (known after apply)
2025-05-19 18:48:55.670851 | orchestrator | 18:48:55.670 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.670862 | orchestrator | 18:48:55.670 STDOUT terraform:  }
2025-05-19 18:48:55.670871 | orchestrator | 18:48:55.670 STDOUT terraform:  }
2025-05-19 18:48:55.670917 | orchestrator | 18:48:55.670 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-05-19 18:48:55.670963 | orchestrator | 18:48:55.670 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-19 18:48:55.671002 | orchestrator | 18:48:55.670 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-19 18:48:55.671040 | orchestrator | 18:48:55.670 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-19 18:48:55.671077 | orchestrator | 18:48:55.671 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-19 18:48:55.671116 | orchestrator | 18:48:55.671 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 18:48:55.671145 | orchestrator | 18:48:55.671 STDOUT terraform:  + availability_zone = "nova"
2025-05-19 18:48:55.671155 | orchestrator | 18:48:55.671 STDOUT terraform:  + config_drive = true
2025-05-19 18:48:55.671200 | orchestrator | 18:48:55.671 STDOUT terraform:  + created = (known after apply)
2025-05-19 18:48:55.671238 | orchestrator | 18:48:55.671 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-19 18:48:55.671281 | orchestrator | 18:48:55.671 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-19 18:48:55.671303 | orchestrator | 18:48:55.671 STDOUT terraform:  + force_delete = false
2025-05-19 18:48:55.671344 | orchestrator | 18:48:55.671 STDOUT terraform:  + id = (known after apply)
2025-05-19 18:48:55.671381 | orchestrator | 18:48:55.671 STDOUT terraform:  + image_id = (known after apply)
2025-05-19 18:48:55.671420 | orchestrator | 18:48:55.671 STDOUT terraform:  + image_name = (known after apply)
2025-05-19 18:48:55.671446 | orchestrator | 18:48:55.671 STDOUT terraform:  + key_pair = "testbed"
2025-05-19 18:48:55.671518 | orchestrator | 18:48:55.671 STDOUT terraform:  + name = "testbed-node-2"
2025-05-19 18:48:55.671529 | orchestrator | 18:48:55.671 STDOUT terraform:  + power_state = "active"
2025-05-19 18:48:55.671550 | orchestrator | 18:48:55.671 STDOUT terraform:  + region = (known after apply)
2025-05-19 18:48:55.671588 | orchestrator | 18:48:55.671 STDOUT terraform:  + security_groups = (known after apply)
2025-05-19 18:48:55.671612 | orchestrator | 18:48:55.671 STDOUT terraform:  + stop_before_destroy = false
2025-05-19 18:48:55.671654 | orchestrator | 18:48:55.671 STDOUT terraform:  + updated = (known after apply)
2025-05-19 18:48:55.671706 | orchestrator | 18:48:55.671 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-19 18:48:55.671717 | orchestrator | 18:48:55.671 STDOUT terraform:  + block_device {
2025-05-19 18:48:55.671744 | orchestrator | 18:48:55.671 STDOUT terraform:  + boot_index = 0
2025-05-19 18:48:55.671778 | orchestrator | 18:48:55.671 STDOUT terraform:  + delete_on_termination = false
2025-05-19 18:48:55.671810 | orchestrator | 18:48:55.671 STDOUT terraform:  + destination_type = "volume"
2025-05-19 18:48:55.671842 | orchestrator | 18:48:55.671 STDOUT terraform:  + multiattach = false
2025-05-19 18:48:55.671908 | orchestrator | 18:48:55.671 STDOUT terraform:  + source_type = "volume"
2025-05-19 18:48:55.671950 | orchestrator | 18:48:55.671 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.671960 | orchestrator | 18:48:55.671 STDOUT terraform:  }
2025-05-19 18:48:55.671979 | orchestrator | 18:48:55.671 STDOUT terraform:  + network {
2025-05-19 18:48:55.672003 | orchestrator | 18:48:55.671 STDOUT terraform:  + access_network = false
2025-05-19 18:48:55.672039 | orchestrator | 18:48:55.671 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-19 18:48:55.672074 | orchestrator | 18:48:55.672 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-19 18:48:55.672110 | orchestrator | 18:48:55.672 STDOUT terraform:  + mac = (known after apply)
2025-05-19 18:48:55.672145 | orchestrator | 18:48:55.672 STDOUT terraform:  + name = (known after apply)
2025-05-19 18:48:55.672182 | orchestrator | 18:48:55.672 STDOUT terraform:  + port = (known after apply)
2025-05-19 18:48:55.672217 | orchestrator | 18:48:55.672 STDOUT terraform:  + uuid = (known after apply)
2025-05-19 18:48:55.672227 | orchestrator | 18:48:55.672 STDOUT terraform:  }
2025-05-19 18:48:55.672235 | orchestrator | 18:48:55.672 STDOUT terraform:  }
2025-05-19 18:48:55.672464 | orchestrator | 18:48:55.672 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 18:48:55.672527 | orchestrator | 18:48:55.672 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.672553 | orchestrator | 18:48:55.672 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 18:48:55.672577 | orchestrator | 18:48:55.672 STDOUT terraform:  + config_drive = true 2025-05-19 18:48:55.672616 | orchestrator | 18:48:55.672 STDOUT terraform:  + created = (known after apply) 2025-05-19 18:48:55.672655 | orchestrator | 18:48:55.672 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 18:48:55.672692 | orchestrator | 18:48:55.672 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 18:48:55.672713 | orchestrator | 18:48:55.672 STDOUT terraform:  + force_delete = false 2025-05-19 18:48:55.672755 | orchestrator | 18:48:55.672 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.672793 | orchestrator | 18:48:55.672 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 18:48:55.672831 | orchestrator | 18:48:55.672 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 18:48:55.672854 | orchestrator | 18:48:55.672 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 18:48:55.672890 | orchestrator | 18:48:55.672 STDOUT terraform:  + name = "testbed-node-3" 2025-05-19 18:48:55.672920 | orchestrator | 18:48:55.672 STDOUT terraform:  + power_state = "active" 2025-05-19 18:48:55.672956 | orchestrator | 18:48:55.672 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.672993 | orchestrator | 18:48:55.672 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 18:48:55.673008 | orchestrator | 18:48:55.672 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 18:48:55.673046 | orchestrator | 18:48:55.673 STDOUT terraform:  + updated = (known after apply) 2025-05-19 18:48:55.673116 | orchestrator | 18:48:55.673 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 18:48:55.673145 | orchestrator | 18:48:55.673 STDOUT terraform:  + block_device { 2025-05-19 18:48:55.673174 | orchestrator | 18:48:55.673 STDOUT terraform:  + boot_index = 0 2025-05-19 18:48:55.673217 | orchestrator | 18:48:55.673 STDOUT terraform:  + delete_on_termination = false 2025-05-19 18:48:55.673251 | orchestrator | 18:48:55.673 STDOUT terraform:  + destination_type = "volume" 2025-05-19 18:48:55.673276 | orchestrator | 18:48:55.673 STDOUT terraform:  + multiattach = false 2025-05-19 18:48:55.673307 | orchestrator | 18:48:55.673 STDOUT terraform:  + source_type = "volume" 2025-05-19 18:48:55.673345 | orchestrator | 18:48:55.673 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.673355 | orchestrator | 18:48:55.673 STDOUT terraform:  } 2025-05-19 18:48:55.673363 | orchestrator | 18:48:55.673 STDOUT terraform:  + network { 2025-05-19 18:48:55.673388 | orchestrator | 18:48:55.673 STDOUT terraform:  + access_network = false 2025-05-19 18:48:55.673420 | orchestrator | 18:48:55.673 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 18:48:55.673452 | orchestrator | 18:48:55.673 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 18:48:55.673514 | orchestrator | 18:48:55.673 STDOUT terraform:  + mac = (known after apply) 2025-05-19 18:48:55.673526 | orchestrator | 18:48:55.673 STDOUT terraform:  + name = (known after apply) 2025-05-19 18:48:55.673553 | orchestrator | 18:48:55.673 STDOUT terraform:  + port = (known after apply) 
2025-05-19 18:48:55.673587 | orchestrator | 18:48:55.673 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.673598 | orchestrator | 18:48:55.673 STDOUT terraform:  } 2025-05-19 18:48:55.673606 | orchestrator | 18:48:55.673 STDOUT terraform:  } 2025-05-19 18:48:55.673651 | orchestrator | 18:48:55.673 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-19 18:48:55.673691 | orchestrator | 18:48:55.673 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 18:48:55.673728 | orchestrator | 18:48:55.673 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 18:48:55.673764 | orchestrator | 18:48:55.673 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 18:48:55.673804 | orchestrator | 18:48:55.673 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 18:48:55.673836 | orchestrator | 18:48:55.673 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.673859 | orchestrator | 18:48:55.673 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 18:48:55.673880 | orchestrator | 18:48:55.673 STDOUT terraform:  + config_drive = true 2025-05-19 18:48:55.673915 | orchestrator | 18:48:55.673 STDOUT terraform:  + created = (known after apply) 2025-05-19 18:48:55.673950 | orchestrator | 18:48:55.673 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 18:48:55.673981 | orchestrator | 18:48:55.673 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 18:48:55.674005 | orchestrator | 18:48:55.673 STDOUT terraform:  + force_delete = false 2025-05-19 18:48:55.674058 | orchestrator | 18:48:55.673 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.674093 | orchestrator | 18:48:55.674 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 18:48:55.674130 | orchestrator | 18:48:55.674 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 18:48:55.674154 | orchestrator | 18:48:55.674 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 18:48:55.674186 | orchestrator | 18:48:55.674 STDOUT terraform:  + name = "testbed-node-4" 2025-05-19 18:48:55.674211 | orchestrator | 18:48:55.674 STDOUT terraform:  + power_state = "active" 2025-05-19 18:48:55.674248 | orchestrator | 18:48:55.674 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.674283 | orchestrator | 18:48:55.674 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 18:48:55.674306 | orchestrator | 18:48:55.674 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 18:48:55.674341 | orchestrator | 18:48:55.674 STDOUT terraform:  + updated = (known after apply) 2025-05-19 18:48:55.674394 | orchestrator | 18:48:55.674 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 18:48:55.674403 | orchestrator | 18:48:55.674 STDOUT terraform:  + block_device { 2025-05-19 18:48:55.674430 | orchestrator | 18:48:55.674 STDOUT terraform:  + boot_index = 0 2025-05-19 18:48:55.674458 | orchestrator | 18:48:55.674 STDOUT terraform:  + delete_on_termination = false 2025-05-19 18:48:55.674514 | orchestrator | 18:48:55.674 STDOUT terraform:  + destination_type = "volume" 2025-05-19 18:48:55.674542 | orchestrator | 18:48:55.674 STDOUT terraform:  + multiattach = false 2025-05-19 18:48:55.674572 | orchestrator | 18:48:55.674 STDOUT terraform:  + source_type = "volume" 2025-05-19 18:48:55.674610 | orchestrator | 18:48:55.674 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.674619 | orchestrator | 
18:48:55.674 STDOUT terraform:  } 2025-05-19 18:48:55.674635 | orchestrator | 18:48:55.674 STDOUT terraform:  + network { 2025-05-19 18:48:55.674659 | orchestrator | 18:48:55.674 STDOUT terraform:  + access_network = false 2025-05-19 18:48:55.674690 | orchestrator | 18:48:55.674 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 18:48:55.674724 | orchestrator | 18:48:55.674 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 18:48:55.674755 | orchestrator | 18:48:55.674 STDOUT terraform:  + mac = (known after apply) 2025-05-19 18:48:55.674785 | orchestrator | 18:48:55.674 STDOUT terraform:  + name = (known after apply) 2025-05-19 18:48:55.674815 | orchestrator | 18:48:55.674 STDOUT terraform:  + port = (known after apply) 2025-05-19 18:48:55.674847 | orchestrator | 18:48:55.674 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.674855 | orchestrator | 18:48:55.674 STDOUT terraform:  } 2025-05-19 18:48:55.674863 | orchestrator | 18:48:55.674 STDOUT terraform:  } 2025-05-19 18:48:55.674911 | orchestrator | 18:48:55.674 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-19 18:48:55.674957 | orchestrator | 18:48:55.674 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 18:48:55.674992 | orchestrator | 18:48:55.674 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 18:48:55.675029 | orchestrator | 18:48:55.674 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 18:48:55.675062 | orchestrator | 18:48:55.675 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 18:48:55.675097 | orchestrator | 18:48:55.675 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.675121 | orchestrator | 18:48:55.675 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 18:48:55.675148 | orchestrator | 18:48:55.675 STDOUT terraform:  + config_drive = true 2025-05-19 18:48:55.675177 | orchestrator | 18:48:55.675 STDOUT terraform:  + created = (known after apply) 2025-05-19 18:48:55.675214 | orchestrator | 18:48:55.675 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 18:48:55.675244 | orchestrator | 18:48:55.675 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 18:48:55.675267 | orchestrator | 18:48:55.675 STDOUT terraform:  + force_delete = false 2025-05-19 18:48:55.675304 | orchestrator | 18:48:55.675 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.675340 | orchestrator | 18:48:55.675 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 18:48:55.675374 | orchestrator | 18:48:55.675 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 18:48:55.675400 | orchestrator | 18:48:55.675 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 18:48:55.675430 | orchestrator | 18:48:55.675 STDOUT terraform:  + name = "testbed-node-5" 2025-05-19 18:48:55.675456 | orchestrator | 18:48:55.675 STDOUT terraform:  + power_state = "active" 2025-05-19 18:48:55.675507 | orchestrator | 18:48:55.675 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.675539 | orchestrator | 18:48:55.675 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 18:48:55.675562 | orchestrator | 18:48:55.675 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 18:48:55.675597 | orchestrator | 18:48:55.675 STDOUT terraform:  + updated = (known after apply) 2025-05-19 18:48:55.675647 | orchestrator | 18:48:55.675 STDOUT terraform:  + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 18:48:55.675671 | orchestrator | 18:48:55.675 STDOUT terraform:  + block_device { 2025-05-19 18:48:55.675682 | orchestrator | 18:48:55.675 STDOUT terraform:  + boot_index = 0 2025-05-19 18:48:55.675710 | orchestrator | 18:48:55.675 STDOUT terraform:  + delete_on_termination = false 2025-05-19 18:48:55.675739 | orchestrator | 18:48:55.675 STDOUT terraform:  + destination_type = "volume" 2025-05-19 18:48:55.675767 | orchestrator | 18:48:55.675 STDOUT terraform:  + multiattach = false 2025-05-19 18:48:55.675797 | orchestrator | 18:48:55.675 STDOUT terraform:  + source_type = "volume" 2025-05-19 18:48:55.675837 | orchestrator | 18:48:55.675 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.675850 | orchestrator | 18:48:55.675 STDOUT terraform:  } 2025-05-19 18:48:55.675858 | orchestrator | 18:48:55.675 STDOUT terraform:  + network { 2025-05-19 18:48:55.675876 | orchestrator | 18:48:55.675 STDOUT terraform:  + access_network = false 2025-05-19 18:48:55.675908 | orchestrator | 18:48:55.675 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 18:48:55.675938 | orchestrator | 18:48:55.675 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 18:48:55.675969 | orchestrator | 18:48:55.675 STDOUT terraform:  + mac = (known after apply) 2025-05-19 18:48:55.676012 | orchestrator | 18:48:55.675 STDOUT terraform:  + name = (known after apply) 2025-05-19 18:48:55.676033 | orchestrator | 18:48:55.675 STDOUT terraform:  + port = (known after apply) 2025-05-19 18:48:55.676065 | orchestrator | 18:48:55.676 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 18:48:55.676074 | orchestrator | 18:48:55.676 STDOUT terraform:  } 2025-05-19 18:48:55.676086 | orchestrator | 18:48:55.676 STDOUT terraform:  } 2025-05-19 18:48:55.676118 | orchestrator | 18:48:55.676 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-05-19 18:48:55.676153 | orchestrator | 18:48:55.676 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-05-19 18:48:55.676182 | orchestrator | 18:48:55.676 STDOUT terraform:  + fingerprint = (known after apply) 2025-05-19 18:48:55.676211 | orchestrator | 18:48:55.676 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.676233 | orchestrator | 18:48:55.676 STDOUT terraform:  + name = "testbed" 2025-05-19 18:48:55.676259 | orchestrator | 18:48:55.676 STDOUT terraform:  + private_key = (sensitive value) 2025-05-19 18:48:55.676290 | orchestrator | 18:48:55.676 STDOUT terraform:  + public_key = (known after apply) 2025-05-19 18:48:55.676319 | orchestrator | 18:48:55.676 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.676347 | orchestrator | 18:48:55.676 STDOUT terraform:  + user_id = (known after apply) 2025-05-19 18:48:55.676356 | orchestrator | 18:48:55.676 STDOUT terraform:  } 2025-05-19 18:48:55.676407 | orchestrator | 18:48:55.676 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-05-19 18:48:55.676456 | orchestrator | 18:48:55.676 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-05-19 18:48:55.676526 | orchestrator | 18:48:55.676 STDOUT terraform:  + device = (known after apply) 2025-05-19 18:48:55.676554 | orchestrator | 18:48:55.676 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.676584 | orchestrator | 18:48:55.676 STDOUT terraform:  + instance_id = (known after apply) 2025-05-19 18:48:55.676614 | 
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device, id, instance_id, region, volume_id = (known after apply)
    }
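Nine volume attachments are planned, with every attribute resolved only after apply. A sketch of the attachment resource follows; the extra-volume resource and the mapping of nine attachments onto six nodes are assumptions, since the volumes themselves are defined outside this excerpt.

    # Sketch only: the volume resource name and the index-to-node mapping are assumptions.
    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9
      instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
      volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id
    }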
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip, floating_ip, id, port_id, region = (known after apply)
    }
  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + pool = "public"
      + address, all_tags, dns_domain, dns_name, fixed_ip, id, port_id, region, subnet_id, tenant_id = (known after apply)
    }
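The manager receives a floating IP from the "public" pool plus an explicit association; both resources carry only computed attributes at plan time. A sketch, under the assumption that the floating IP is bound to the management port defined further down:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }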
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + name = "net-testbed-management"
      + availability_zone_hints = ["nova"]
      + admin_state_up, all_tags, dns_domain, external, id, mtu, port_security_enabled,
        qos_policy_id, region, shared, tenant_id, transparent_vlan, segments = (known after apply)
    }
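The management network is created with an availability-zone hint of "nova". The corresponding subnet is not part of this excerpt; the name and CIDR in the sketch below are inferred from the 192.168.16.x fixed IPs used by the ports and are therefore assumptions.

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }

    # Assumed subnet: only its existence is implied by the plan; name and CIDR are guesses.
    resource "openstack_networking_subnet_v2" "subnet_management" {
      name       = "subnet-testbed-management"
      network_id = openstack_networking_network_v2.net_management.id
      cidr       = "192.168.16.0/20"
      ip_version = 4
    }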
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + fixed_ip { ip_address = "192.168.16.5", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
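The manager port pins the fixed IP 192.168.16.5 and whitelists two additional prefixes via allowed_address_pairs, which is what lets the manager answer for addresses it does not own. A sketch, reusing the assumed subnet resource from above:

    resource "openstack_networking_port_v2" "manager_port_management" {
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet
        ip_address = "192.168.16.5"
      }

      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }

      allowed_address_pairs {
        ip_address = "192.168.16.8/20"
      }
    }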
  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.10", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.11", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.12", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.13", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.14", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + allowed_address_pairs { ip_address = "192.168.112.0/20" }
      + allowed_address_pairs { ip_address = "192.168.16.254/20" }
      + allowed_address_pairs { ip_address = "192.168.16.8/20" }
      + allowed_address_pairs { ip_address = "192.168.16.9/20" }
      + fixed_ip { ip_address = "192.168.16.15", subnet_id = (known after apply) }
      + admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id, device_owner,
        dns_assignment, dns_name, id, mac_address, network_id, port_security_enabled,
        qos_policy_id, region, security_group_ids, tenant_id, binding = (known after apply)
    }
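The six node ports differ only in their fixed IP (192.168.16.10 through .15) and share the same four allowed address pairs, so they can be expressed with count and a dynamic block. A sketch, again assuming the subnet resource introduced earlier:

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = var.node_count
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet
        ip_address = "192.168.16.${10 + count.index}"
      }

      dynamic "allowed_address_pairs" {
        for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
        content {
          ip_address = allowed_address_pairs.value
        }
      }
    }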
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id, port_id, region, router_id, subnet_id = (known after apply)
    }
  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + name = "testbed"
      + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + availability_zone_hints = ["nova"]
      + admin_state_up, all_tags, distributed, enable_snat, id, region, tenant_id, external_fixed_ip = (known after apply)
    }
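The router uplinks to the external network e6be7364-bfd8-4de7-8120-8f41c69a139a and is plugged into the management subnet through a router interface. A sketch of this pairing; passing the external network ID in as a variable rather than hard-coding it would be the usual choice.

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"   # typically supplied as a variable
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet
    }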
"0.0.0.0/0" 2025-05-19 18:48:55.688542 | orchestrator | 18:48:55.688 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.689286 | orchestrator | 18:48:55.688 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.689305 | orchestrator | 18:48:55.689 STDOUT terraform:  } 2025-05-19 18:48:55.689362 | orchestrator | 18:48:55.689 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-19 18:48:55.689416 | orchestrator | 18:48:55.689 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-19 18:48:55.689443 | orchestrator | 18:48:55.689 STDOUT terraform:  + description = "wireguard" 2025-05-19 18:48:55.689481 | orchestrator | 18:48:55.689 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.689528 | orchestrator | 18:48:55.689 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.689562 | orchestrator | 18:48:55.689 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.689583 | orchestrator | 18:48:55.689 STDOUT terraform:  + port_range_max = 51820 2025-05-19 18:48:55.689604 | orchestrator | 18:48:55.689 STDOUT terraform:  + port_range_min = 51820 2025-05-19 18:48:55.689626 | orchestrator | 18:48:55.689 STDOUT terraform:  + protocol = "udp" 2025-05-19 18:48:55.689656 | orchestrator | 18:48:55.689 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.689735 | orchestrator | 18:48:55.689 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.689761 | orchestrator | 18:48:55.689 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.689791 | orchestrator | 18:48:55.689 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.689822 | orchestrator | 18:48:55.689 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.689836 | orchestrator | 18:48:55.689 STDOUT terraform:  } 2025-05-19 18:48:55.689889 | orchestrator | 18:48:55.689 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-19 18:48:55.690642 | orchestrator | 18:48:55.689 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-19 18:48:55.690778 | orchestrator | 18:48:55.690 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.690786 | orchestrator | 18:48:55.690 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.690791 | orchestrator | 18:48:55.690 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.690795 | orchestrator | 18:48:55.690 STDOUT terraform:  + protocol = "tcp" 2025-05-19 18:48:55.690799 | orchestrator | 18:48:55.690 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.690813 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.690817 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-19 18:48:55.690821 | orchestrator | 18:48:55.690 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.690825 | orchestrator | 18:48:55.690 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.690829 | orchestrator | 18:48:55.690 STDOUT terraform:  } 2025-05-19 18:48:55.690833 | orchestrator | 18:48:55.690 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-19 18:48:55.690837 | 
orchestrator | 18:48:55.690 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-19 18:48:55.690841 | orchestrator | 18:48:55.690 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.690845 | orchestrator | 18:48:55.690 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.690849 | orchestrator | 18:48:55.690 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.690858 | orchestrator | 18:48:55.690 STDOUT terraform:  + protocol = "udp" 2025-05-19 18:48:55.690869 | orchestrator | 18:48:55.690 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.690873 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.690877 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-19 18:48:55.690881 | orchestrator | 18:48:55.690 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.690885 | orchestrator | 18:48:55.690 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.690889 | orchestrator | 18:48:55.690 STDOUT terraform:  } 2025-05-19 18:48:55.690892 | orchestrator | 18:48:55.690 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-19 18:48:55.690896 | orchestrator | 18:48:55.690 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-19 18:48:55.690900 | orchestrator | 18:48:55.690 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.690904 | orchestrator | 18:48:55.690 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.690909 | orchestrator | 18:48:55.690 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.690915 | orchestrator | 18:48:55.690 STDOUT terraform:  + protocol = "icmp" 2025-05-19 18:48:55.691435 | orchestrator | 18:48:55.690 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.691445 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.691449 | orchestrator | 18:48:55.690 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.691453 | orchestrator | 18:48:55.690 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.691457 | orchestrator | 18:48:55.691 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.691461 | orchestrator | 18:48:55.691 STDOUT terraform:  } 2025-05-19 18:48:55.691465 | orchestrator | 18:48:55.691 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-19 18:48:55.691497 | orchestrator | 18:48:55.691 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-19 18:48:55.691501 | orchestrator | 18:48:55.691 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.691505 | orchestrator | 18:48:55.691 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.691508 | orchestrator | 18:48:55.691 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.691512 | orchestrator | 18:48:55.691 STDOUT terraform:  + protocol = "tcp" 2025-05-19 18:48:55.691516 | orchestrator | 18:48:55.691 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.691520 | orchestrator | 18:48:55.691 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.691523 | orchestrator | 18:48:55.691 STDOUT terraform:  + 
remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.691527 | orchestrator | 18:48:55.691 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.691531 | orchestrator | 18:48:55.691 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.691534 | orchestrator | 18:48:55.691 STDOUT terraform:  } 2025-05-19 18:48:55.691538 | orchestrator | 18:48:55.691 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-19 18:48:55.691545 | orchestrator | 18:48:55.691 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-19 18:48:55.691549 | orchestrator | 18:48:55.691 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.691553 | orchestrator | 18:48:55.691 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.691556 | orchestrator | 18:48:55.691 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.691562 | orchestrator | 18:48:55.691 STDOUT terraform:  + protocol = "udp" 2025-05-19 18:48:55.691624 | orchestrator | 18:48:55.691 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.691656 | orchestrator | 18:48:55.691 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.691701 | orchestrator | 18:48:55.691 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.691739 | orchestrator | 18:48:55.691 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.691774 | orchestrator | 18:48:55.691 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.691780 | orchestrator | 18:48:55.691 STDOUT terraform:  } 2025-05-19 18:48:55.691842 | orchestrator | 18:48:55.691 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-19 18:48:55.691901 | orchestrator | 18:48:55.691 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-19 18:48:55.691934 | orchestrator | 18:48:55.691 STDOUT terraform:  + direction = "ingress" 2025-05-19 18:48:55.691957 | orchestrator | 18:48:55.691 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.691993 | orchestrator | 18:48:55.691 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.692015 | orchestrator | 18:48:55.691 STDOUT terraform:  + protocol = "icmp" 2025-05-19 18:48:55.692045 | orchestrator | 18:48:55.692 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.692081 | orchestrator | 18:48:55.692 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.692105 | orchestrator | 18:48:55.692 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.692135 | orchestrator | 18:48:55.692 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.692165 | orchestrator | 18:48:55.692 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.692180 | orchestrator | 18:48:55.692 STDOUT terraform:  } 2025-05-19 18:48:55.692230 | orchestrator | 18:48:55.692 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-19 18:48:55.692288 | orchestrator | 18:48:55.692 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-19 18:48:55.692301 | orchestrator | 18:48:55.692 STDOUT terraform:  + description = "vrrp" 2025-05-19 18:48:55.692326 | orchestrator | 18:48:55.692 STDOUT terraform:  + direction = "ingress" 
2025-05-19 18:48:55.692347 | orchestrator | 18:48:55.692 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 18:48:55.692379 | orchestrator | 18:48:55.692 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.692399 | orchestrator | 18:48:55.692 STDOUT terraform:  + protocol = "112" 2025-05-19 18:48:55.692432 | orchestrator | 18:48:55.692 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.692462 | orchestrator | 18:48:55.692 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 18:48:55.692499 | orchestrator | 18:48:55.692 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 18:48:55.692524 | orchestrator | 18:48:55.692 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 18:48:55.692554 | orchestrator | 18:48:55.692 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.692569 | orchestrator | 18:48:55.692 STDOUT terraform:  } 2025-05-19 18:48:55.692618 | orchestrator | 18:48:55.692 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-19 18:48:55.692667 | orchestrator | 18:48:55.692 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-19 18:48:55.692696 | orchestrator | 18:48:55.692 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.692732 | orchestrator | 18:48:55.692 STDOUT terraform:  + description = "management security group" 2025-05-19 18:48:55.692762 | orchestrator | 18:48:55.692 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.692790 | orchestrator | 18:48:55.692 STDOUT terraform:  + name = "testbed-management" 2025-05-19 18:48:55.692819 | orchestrator | 18:48:55.692 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.692848 | orchestrator | 18:48:55.692 STDOUT terraform:  + stateful = (known after apply) 2025-05-19 18:48:55.692877 | orchestrator | 18:48:55.692 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.692890 | orchestrator | 18:48:55.692 STDOUT terraform:  } 2025-05-19 18:48:55.692938 | orchestrator | 18:48:55.692 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-19 18:48:55.692985 | orchestrator | 18:48:55.692 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-19 18:48:55.693013 | orchestrator | 18:48:55.692 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.693045 | orchestrator | 18:48:55.693 STDOUT terraform:  + description = "node security group" 2025-05-19 18:48:55.693073 | orchestrator | 18:48:55.693 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.693097 | orchestrator | 18:48:55.693 STDOUT terraform:  + name = "testbed-node" 2025-05-19 18:48:55.693125 | orchestrator | 18:48:55.693 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.693154 | orchestrator | 18:48:55.693 STDOUT terraform:  + stateful = (known after apply) 2025-05-19 18:48:55.693182 | orchestrator | 18:48:55.693 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.693196 | orchestrator | 18:48:55.693 STDOUT terraform:  } 2025-05-19 18:48:55.693243 | orchestrator | 18:48:55.693 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-19 18:48:55.693285 | orchestrator | 18:48:55.693 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-19 18:48:55.693316 | orchestrator | 18:48:55.693 STDOUT 
terraform:  + all_tags = (known after apply) 2025-05-19 18:48:55.693346 | orchestrator | 18:48:55.693 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-19 18:48:55.693367 | orchestrator | 18:48:55.693 STDOUT terraform:  + dns_nameservers = [ 2025-05-19 18:48:55.693385 | orchestrator | 18:48:55.693 STDOUT terraform:  + "8.8.8.8", 2025-05-19 18:48:55.693402 | orchestrator | 18:48:55.693 STDOUT terraform:  + "9.9.9.9", 2025-05-19 18:48:55.693418 | orchestrator | 18:48:55.693 STDOUT terraform:  ] 2025-05-19 18:48:55.693438 | orchestrator | 18:48:55.693 STDOUT terraform:  + enable_dhcp = true 2025-05-19 18:48:55.693494 | orchestrator | 18:48:55.693 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-19 18:48:55.693529 | orchestrator | 18:48:55.693 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.693549 | orchestrator | 18:48:55.693 STDOUT terraform:  + ip_version = 4 2025-05-19 18:48:55.693580 | orchestrator | 18:48:55.693 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-19 18:48:55.693611 | orchestrator | 18:48:55.693 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-19 18:48:55.693649 | orchestrator | 18:48:55.693 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-19 18:48:55.693684 | orchestrator | 18:48:55.693 STDOUT terraform:  + network_id = (known after apply) 2025-05-19 18:48:55.693708 | orchestrator | 18:48:55.693 STDOUT terraform:  + no_gateway = false 2025-05-19 18:48:55.693734 | orchestrator | 18:48:55.693 STDOUT terraform:  + region = (known after apply) 2025-05-19 18:48:55.693765 | orchestrator | 18:48:55.693 STDOUT terraform:  + service_types = (known after apply) 2025-05-19 18:48:55.693795 | orchestrator | 18:48:55.693 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 18:48:55.693807 | orchestrator | 18:48:55.693 STDOUT terraform:  + allocation_pool { 2025-05-19 18:48:55.693838 | orchestrator | 18:48:55.693 STDOUT terraform:  + end = "192.168.31.250" 2025-05-19 18:48:55.693862 | orchestrator | 18:48:55.693 STDOUT terraform:  + start = "192.168.31.200" 2025-05-19 18:48:55.693868 | orchestrator | 18:48:55.693 STDOUT terraform:  } 2025-05-19 18:48:55.693884 | orchestrator | 18:48:55.693 STDOUT terraform:  } 2025-05-19 18:48:55.693907 | orchestrator | 18:48:55.693 STDOUT terraform:  # terraform_data.image will be created 2025-05-19 18:48:55.693932 | orchestrator | 18:48:55.693 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-19 18:48:55.693956 | orchestrator | 18:48:55.693 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.693973 | orchestrator | 18:48:55.693 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-19 18:48:55.693997 | orchestrator | 18:48:55.693 STDOUT terraform:  + output = (known after apply) 2025-05-19 18:48:55.694004 | orchestrator | 18:48:55.693 STDOUT terraform:  } 2025-05-19 18:48:55.694057 | orchestrator | 18:48:55.694 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-19 18:48:55.694087 | orchestrator | 18:48:55.694 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-19 18:48:55.694112 | orchestrator | 18:48:55.694 STDOUT terraform:  + id = (known after apply) 2025-05-19 18:48:55.694128 | orchestrator | 18:48:55.694 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-19 18:48:55.694156 | orchestrator | 18:48:55.694 STDOUT terraform:  + output = (known after apply) 2025-05-19 18:48:55.694162 | orchestrator | 18:48:55.694 STDOUT terraform:  } 2025-05-19 18:48:55.694197 | orchestrator | 18:48:55.694 STDOUT 
terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-19 18:48:55.694203 | orchestrator | 18:48:55.694 STDOUT terraform: Changes to Outputs: 2025-05-19 18:48:55.694233 | orchestrator | 18:48:55.694 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-19 18:48:55.694259 | orchestrator | 18:48:55.694 STDOUT terraform:  + private_key = (sensitive value) 2025-05-19 18:48:55.763083 | orchestrator | 18:48:55.762 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-19 18:48:55.910221 | orchestrator | 18:48:55.909 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=25b828f8-e77e-b40b-7825-3ca6db805824] 2025-05-19 18:48:55.910313 | orchestrator | 18:48:55.910 STDOUT terraform: terraform_data.image: Creating... 2025-05-19 18:48:55.911024 | orchestrator | 18:48:55.910 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=0176a6de-8d34-6276-c746-a62f9ef0feae] 2025-05-19 18:48:55.926082 | orchestrator | 18:48:55.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-19 18:48:55.926889 | orchestrator | 18:48:55.926 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-19 18:48:55.927127 | orchestrator | 18:48:55.927 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-19 18:48:55.932986 | orchestrator | 18:48:55.932 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-19 18:48:55.933595 | orchestrator | 18:48:55.933 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-19 18:48:55.938304 | orchestrator | 18:48:55.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-19 18:48:55.938357 | orchestrator | 18:48:55.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-19 18:48:55.938363 | orchestrator | 18:48:55.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-19 18:48:55.938368 | orchestrator | 18:48:55.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-19 18:48:55.938373 | orchestrator | 18:48:55.938 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-19 18:48:56.380209 | orchestrator | 18:48:56.379 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-19 18:48:56.385274 | orchestrator | 18:48:56.384 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-19 18:48:56.385682 | orchestrator | 18:48:56.385 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-19 18:48:56.390203 | orchestrator | 18:48:56.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-19 18:48:56.501679 | orchestrator | 18:48:56.501 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-05-19 18:48:56.509117 | orchestrator | 18:48:56.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-19 18:49:01.887334 | orchestrator | 18:49:01.887 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=7ec0a792-cd8a-472e-9a0d-1e81898a885d] 2025-05-19 18:49:01.894690 | orchestrator | 18:49:01.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
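For orientation, the management network, subnet, router and router interface described by the flattened plan output above correspond to Terraform definitions roughly like the following sketch (reconstructed from the plan values; the network name is a placeholder and the exact attribute set in the testbed repository may differ):

resource "openstack_networking_network_v2" "net_management" {
  name = "net-testbed-management" # name is not visible in this log; placeholder
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}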
2025-05-19 18:49:05.929822 | orchestrator | 18:49:05.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-05-19 18:49:05.935911 | orchestrator | 18:49:05.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-19 18:49:05.936056 | orchestrator | 18:49:05.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-19 18:49:05.936921 | orchestrator | 18:49:05.936 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-19 18:49:05.937162 | orchestrator | 18:49:05.936 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-19 18:49:05.937428 | orchestrator | 18:49:05.937 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-19 18:49:06.386949 | orchestrator | 18:49:06.386 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-19 18:49:06.391048 | orchestrator | 18:49:06.390 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-19 18:49:06.496904 | orchestrator | 18:49:06.495 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=cefbdaf0-1f4e-46ad-9d0a-02354cb171be] 2025-05-19 18:49:06.498636 | orchestrator | 18:49:06.498 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=f14fc737-7fc7-4300-a12c-0d45556a294d] 2025-05-19 18:49:06.502278 | orchestrator | 18:49:06.502 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-19 18:49:06.509577 | orchestrator | 18:49:06.509 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-19 18:49:06.509665 | orchestrator | 18:49:06.509 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-19 18:49:06.521333 | orchestrator | 18:49:06.521 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=75dd3d3f-610d-4410-ad7d-41af206bb5b3] 2025-05-19 18:49:06.526453 | orchestrator | 18:49:06.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-19 18:49:06.549002 | orchestrator | 18:49:06.548 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=ccb5460a-d35b-438c-9adb-1ec03f5b0ca2] 2025-05-19 18:49:06.550671 | orchestrator | 18:49:06.550 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=69146676-2ac4-45fa-96a7-ebd6f82ff2f3] 2025-05-19 18:49:06.552175 | orchestrator | 18:49:06.551 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=d327778e-2231-4334-9e4b-af08a803eb53] 2025-05-19 18:49:06.557898 | orchestrator | 18:49:06.557 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-19 18:49:06.561218 | orchestrator | 18:49:06.561 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-19 18:49:06.561388 | orchestrator | 18:49:06.561 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 
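The nine node volumes, six node base volumes and the single manager base volume being created here are plain Block Storage resources; a minimal sketch, assuming sizes and names that are not shown in this log:

resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9                               # node_volume[0] .. node_volume[8] in the log
  name  = "testbed-volume-${count.index}" # naming is an assumption
  size  = 20                              # size is not shown in this log; placeholder
}

resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count    = 6                                  # node_base_volume[0] .. node_base_volume[5]
  name     = "testbed-node-base-${count.index}" # assumption
  size     = 50                                 # placeholder
  image_id = data.openstack_images_image_v2.image_node.id # base image for boot-from-volume
}

resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
  count    = 1
  name     = "testbed-manager-base" # assumption
  size     = 50                     # placeholder
  image_id = data.openstack_images_image_v2.image.id
}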
2025-05-19 18:49:06.598779 | orchestrator | 18:49:06.598 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=cc8857f4-0920-4071-aa29-561fcd5ac091] 2025-05-19 18:49:06.608454 | orchestrator | 18:49:06.608 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=4a1dc982-c7ec-4970-a1b2-e96be6dbc199] 2025-05-19 18:49:06.614962 | orchestrator | 18:49:06.614 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-05-19 18:49:06.618771 | orchestrator | 18:49:06.618 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=bf58f9af6f503cebb28eeaed2d0f9fdb2cf6af6c] 2025-05-19 18:49:06.627544 | orchestrator | 18:49:06.627 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-19 18:49:06.629276 | orchestrator | 18:49:06.629 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-19 18:49:06.631041 | orchestrator | 18:49:06.630 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=29ced5b42cf17b7d5d73d718af35820c63924c08] 2025-05-19 18:49:06.697529 | orchestrator | 18:49:06.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=61384220-7968-49f8-abf1-ef218bf9da20] 2025-05-19 18:49:11.898323 | orchestrator | 18:49:11.897 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-19 18:49:12.302393 | orchestrator | 18:49:12.302 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=4d43deee-0972-4c02-80df-437cbd2714e2] 2025-05-19 18:49:12.811673 | orchestrator | 18:49:12.811 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=b8bbbe0d-b50a-4dd4-ad7c-f120d4d2c463] 2025-05-19 18:49:12.821902 | orchestrator | 18:49:12.821 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-19 18:49:16.504097 | orchestrator | 18:49:16.503 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-19 18:49:16.509168 | orchestrator | 18:49:16.508 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-19 18:49:16.527550 | orchestrator | 18:49:16.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-19 18:49:16.558972 | orchestrator | 18:49:16.558 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-19 18:49:16.562168 | orchestrator | 18:49:16.561 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-19 18:49:16.562491 | orchestrator | 18:49:16.562 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-19 18:49:16.845853 | orchestrator | 18:49:16.845 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=d5f9cc4b-8b29-4481-945d-7cb76299c28b] 2025-05-19 18:49:16.876876 | orchestrator | 18:49:16.876 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=49c2c95e-ca71-42b4-aa69-7630ee3c63b4] 2025-05-19 18:49:16.906587 | orchestrator | 18:49:16.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d] 2025-05-19 18:49:16.907160 | orchestrator | 18:49:16.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=343e5b57-eba5-4b83-86e1-b9250508edd4] 2025-05-19 18:49:16.936250 | orchestrator | 18:49:16.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=bd4a323c-070b-40ce-9313-87b44bb33677] 2025-05-19 18:49:16.968835 | orchestrator | 18:49:16.968 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=b10fa023-6143-4ca1-b9f5-0c52162081c1] 2025-05-19 18:49:20.696838 | orchestrator | 18:49:20.696 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=cbddf006-d1af-4a08-8ac2-9f4982e7c1b3] 2025-05-19 18:49:20.702732 | orchestrator | 18:49:20.702 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-19 18:49:20.702799 | orchestrator | 18:49:20.702 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-19 18:49:20.704133 | orchestrator | 18:49:20.703 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-19 18:49:20.858765 | orchestrator | 18:49:20.858 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=be3e1164-c715-4c65-ab7a-4a8bc4d2cece] 2025-05-19 18:49:20.871547 | orchestrator | 18:49:20.871 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-19 18:49:20.874351 | orchestrator | 18:49:20.874 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-19 18:49:20.879857 | orchestrator | 18:49:20.879 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-19 18:49:20.885125 | orchestrator | 18:49:20.885 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-19 18:49:20.888514 | orchestrator | 18:49:20.888 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-19 18:49:20.888962 | orchestrator | 18:49:20.888 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=07102c91-6da8-43df-b13d-c57008b1d55d] 2025-05-19 18:49:20.890063 | orchestrator | 18:49:20.889 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-19 18:49:20.892861 | orchestrator | 18:49:20.892 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-19 18:49:20.893245 | orchestrator | 18:49:20.893 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-19 18:49:20.901594 | orchestrator | 18:49:20.901 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 
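The two security groups and their rules (ssh on 22/tcp, WireGuard on 51820/udp, ICMP, VRRP as IP protocol 112, plus the intra-network 192.168.16.0/20 rules) follow the usual secgroup/secgroup-rule pattern; a representative sketch based on the plan values shown earlier (which group the VRRP rule belongs to is inferred from creation order):

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# ssh from anywhere into the management group
resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

# VRRP (IP protocol 112) between the nodes
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}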
2025-05-19 18:49:21.024402 | orchestrator | 18:49:21.023 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=1ec2c606-1cc4-45c6-a99c-50f00e8fc9c8] 2025-05-19 18:49:21.044908 | orchestrator | 18:49:21.044 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-19 18:49:21.195326 | orchestrator | 18:49:21.194 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=dd3087b2-9c3a-4976-ac57-d8ab132e6b1e] 2025-05-19 18:49:21.202873 | orchestrator | 18:49:21.202 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-05-19 18:49:21.441902 | orchestrator | 18:49:21.441 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=862e7959-5f70-4383-9023-c9f21ebd317a] 2025-05-19 18:49:21.449866 | orchestrator | 18:49:21.449 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=4e2a3d92-ca55-45cc-9251-13a5a09de188] 2025-05-19 18:49:21.454130 | orchestrator | 18:49:21.453 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-19 18:49:21.455966 | orchestrator | 18:49:21.455 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-19 18:49:21.618956 | orchestrator | 18:49:21.618 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=ea0606d1-cebd-482c-a551-0eb452553244] 2025-05-19 18:49:21.627761 | orchestrator | 18:49:21.627 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-19 18:49:21.933137 | orchestrator | 18:49:21.932 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=2f3a3f27-59d6-4223-888a-b104ddc52181] 2025-05-19 18:49:21.941374 | orchestrator | 18:49:21.941 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-19 18:49:22.082615 | orchestrator | 18:49:22.082 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=cb0051e7-6e23-470e-ab12-1d946f5ff913] 2025-05-19 18:49:22.100037 | orchestrator | 18:49:22.099 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
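The management ports being created here carry allowed-address-pairs so that VIPs and routed prefixes pass Neutron's anti-spoofing filter, as visible in the port plan near the top of this output (fixed IP 192.168.16.15, pairs such as 192.168.112.0/20 and 192.168.16.254/20); a sketch of one such port definition, with the per-node IP scheme as an assumption:

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6 # node_port_management[0] .. [5] in the log
  network_id = openstack_networking_network_v2.net_management.id

  security_group_ids = [
    openstack_networking_secgroup_v2.security_group_node.id,
  ]

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = "192.168.16.1${count.index}" # per-node addressing scheme is an assumption
  }

  # Let VIP/service prefixes pass the port's anti-spoofing filter (values from the plan above)
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
}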
2025-05-19 18:49:22.228604 | orchestrator | 18:49:22.228 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a94e8d15-54ef-4d40-8503-c3b5826f9c63] 2025-05-19 18:49:22.365333 | orchestrator | 18:49:22.364 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=5975ed79-7857-4176-a8f5-3acfa87c8ff8] 2025-05-19 18:49:26.491240 | orchestrator | 18:49:26.490 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=4ce3a513-5ebc-46b9-aab1-e7645316e7ac] 2025-05-19 18:49:26.514882 | orchestrator | 18:49:26.514 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=6b11645d-6d43-459a-8103-4ebc61e053cb] 2025-05-19 18:49:26.519786 | orchestrator | 18:49:26.519 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=b914f48d-ceba-4087-80db-371a156df794] 2025-05-19 18:49:26.594494 | orchestrator | 18:49:26.594 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=66fd0c61-ec1a-4df2-9e40-3166759c40f5] 2025-05-19 18:49:26.677634 | orchestrator | 18:49:26.677 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=059301bf-5ab7-4a03-acef-0aef8cd20057] 2025-05-19 18:49:26.735186 | orchestrator | 18:49:26.734 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=94b10b4a-c5f6-4f5e-8d0e-62b6ad505056] 2025-05-19 18:49:27.575413 | orchestrator | 18:49:27.575 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=1b25cc5b-40fb-44ee-8f6a-e477580e3cb5] 2025-05-19 18:49:28.320351 | orchestrator | 18:49:28.319 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=89821c94-acf3-477a-a0c1-fa5c821dd4cd] 2025-05-19 18:49:28.335249 | orchestrator | 18:49:28.335 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-19 18:49:28.355703 | orchestrator | 18:49:28.355 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-19 18:49:28.355948 | orchestrator | 18:49:28.355 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-19 18:49:28.362730 | orchestrator | 18:49:28.362 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-19 18:49:28.370821 | orchestrator | 18:49:28.370 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-19 18:49:28.372198 | orchestrator | 18:49:28.372 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-19 18:49:28.375558 | orchestrator | 18:49:28.375 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-19 18:49:34.606225 | orchestrator | 18:49:34.604 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=3e72c5dc-d05e-4dc8-b670-979ce8427230] 2025-05-19 18:49:34.621169 | orchestrator | 18:49:34.620 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-19 18:49:34.625287 | orchestrator | 18:49:34.625 STDOUT terraform: local_file.inventory: Creating... 2025-05-19 18:49:34.627249 | orchestrator | 18:49:34.627 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
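The manager's floating IP and its association with the manager's management port (openstack_networking_port_v2.manager_port_management, defined analogously to the node ports above) are two separate resources; a minimal sketch, with the floating IP pool name as a placeholder since it is not shown in this log:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "external" # pool name is not shown in this log; placeholder
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}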
2025-05-19 18:49:34.631994 | orchestrator | 18:49:34.631 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=0069d6c1d30dc3ecdc660c66183962182acfa905] 2025-05-19 18:49:34.632757 | orchestrator | 18:49:34.632 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fdab5ace13021d2cce4c5d1d076004c4cd4e4974] 2025-05-19 18:49:35.351004 | orchestrator | 18:49:35.350 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=3e72c5dc-d05e-4dc8-b670-979ce8427230] 2025-05-19 18:49:38.356525 | orchestrator | 18:49:38.356 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-19 18:49:38.357760 | orchestrator | 18:49:38.357 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-19 18:49:38.369134 | orchestrator | 18:49:38.368 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-19 18:49:38.371353 | orchestrator | 18:49:38.371 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-19 18:49:38.376434 | orchestrator | 18:49:38.376 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-19 18:49:38.377668 | orchestrator | 18:49:38.377 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-19 18:49:48.356899 | orchestrator | 18:49:48.356 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-19 18:49:48.357793 | orchestrator | 18:49:48.357 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-19 18:49:48.370161 | orchestrator | 18:49:48.369 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-19 18:49:48.372287 | orchestrator | 18:49:48.372 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-19 18:49:48.377681 | orchestrator | 18:49:48.377 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-19 18:49:48.378792 | orchestrator | 18:49:48.378 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-19 18:49:58.357109 | orchestrator | 18:49:58.356 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-05-19 18:49:58.358041 | orchestrator | 18:49:58.357 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-19 18:49:58.370843 | orchestrator | 18:49:58.370 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-19 18:49:58.372973 | orchestrator | 18:49:58.372 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-19 18:49:58.378715 | orchestrator | 18:49:58.378 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-19 18:49:58.379713 | orchestrator | 18:49:58.379 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-05-19 18:49:58.831481 | orchestrator | 18:49:58.831 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=6dc44a1f-3c61-405f-903e-deae693e311a] 2025-05-19 18:49:58.915954 | orchestrator | 18:49:58.915 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=fcf44d9f-98fa-4227-96e4-a26b4ca91047] 2025-05-19 18:49:58.975382 | orchestrator | 18:49:58.974 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=bcc790fa-5f8a-483f-b584-2cf605a4169b] 2025-05-19 18:49:59.046540 | orchestrator | 18:49:59.046 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=0b06662e-56b9-42c1-a800-45ec91d8f64d] 2025-05-19 18:50:08.358398 | orchestrator | 18:50:08.358 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-05-19 18:50:08.373911 | orchestrator | 18:50:08.373 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-05-19 18:50:08.890876 | orchestrator | 18:50:08.890 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=0e71d539-a67c-43a1-b096-6e668bda8971] 2025-05-19 18:50:08.907700 | orchestrator | 18:50:08.907 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=154b89a5-8e3a-4db3-a91a-c89554810c85] 2025-05-19 18:50:08.919791 | orchestrator | 18:50:08.919 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-19 18:50:08.925668 | orchestrator | 18:50:08.925 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1173398183816669781] 2025-05-19 18:50:08.943563 | orchestrator | 18:50:08.943 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-19 18:50:08.945020 | orchestrator | 18:50:08.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-19 18:50:08.945656 | orchestrator | 18:50:08.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-19 18:50:08.945794 | orchestrator | 18:50:08.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-19 18:50:08.951160 | orchestrator | 18:50:08.951 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-05-19 18:50:08.951611 | orchestrator | 18:50:08.951 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-19 18:50:08.954550 | orchestrator | 18:50:08.954 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-19 18:50:08.958696 | orchestrator | 18:50:08.958 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-19 18:50:08.974716 | orchestrator | 18:50:08.974 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-19 18:50:08.974958 | orchestrator | 18:50:08.974 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
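The node servers boot from the previously created base volumes and attach to the pre-created management ports, with the image name passed through the terraform_data indirection seen in the plan; a sketch under those assumptions (flavor, naming and the exact block_device settings are placeholders):

resource "terraform_data" "image_node" {
  input = "Ubuntu 24.04" # value taken from the plan output above
}

data "openstack_images_image_v2" "image_node" {
  name        = terraform_data.image_node.output
  most_recent = true # assumption; the real lookup filter may differ
}

resource "openstack_compute_instance_v2" "node_server" {
  count       = 6                             # node_server[0] .. [5] in the log
  name        = "testbed-node-${count.index}" # naming is an assumption
  flavor_name = "SCS-4V-16"                   # flavor is not shown in this log; placeholder
  key_pair    = "testbed"                     # keypair id seen earlier in the apply

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }

  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_base_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }
}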
2025-05-19 18:50:14.267368 | orchestrator | 18:50:14.266 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=0b06662e-56b9-42c1-a800-45ec91d8f64d/f14fc737-7fc7-4300-a12c-0d45556a294d] 2025-05-19 18:50:14.289140 | orchestrator | 18:50:14.288 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=0e71d539-a67c-43a1-b096-6e668bda8971/d327778e-2231-4334-9e4b-af08a803eb53] 2025-05-19 18:50:14.299790 | orchestrator | 18:50:14.299 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=fcf44d9f-98fa-4227-96e4-a26b4ca91047/cefbdaf0-1f4e-46ad-9d0a-02354cb171be] 2025-05-19 18:50:14.334719 | orchestrator | 18:50:14.334 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=fcf44d9f-98fa-4227-96e4-a26b4ca91047/61384220-7968-49f8-abf1-ef218bf9da20] 2025-05-19 18:50:14.361080 | orchestrator | 18:50:14.360 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=0b06662e-56b9-42c1-a800-45ec91d8f64d/75dd3d3f-610d-4410-ad7d-41af206bb5b3] 2025-05-19 18:50:14.387353 | orchestrator | 18:50:14.386 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=0e71d539-a67c-43a1-b096-6e668bda8971/ccb5460a-d35b-438c-9adb-1ec03f5b0ca2] 2025-05-19 18:50:14.403632 | orchestrator | 18:50:14.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=fcf44d9f-98fa-4227-96e4-a26b4ca91047/cc8857f4-0920-4071-aa29-561fcd5ac091] 2025-05-19 18:50:14.412275 | orchestrator | 18:50:14.411 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=0b06662e-56b9-42c1-a800-45ec91d8f64d/69146676-2ac4-45fa-96a7-ebd6f82ff2f3] 2025-05-19 18:50:14.581896 | orchestrator | 18:50:14.581 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=0e71d539-a67c-43a1-b096-6e668bda8971/4a1dc982-c7ec-4970-a1b2-e96be6dbc199] 2025-05-19 18:50:18.976671 | orchestrator | 18:50:18.976 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-19 18:50:28.977884 | orchestrator | 18:50:28.977 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-19 18:50:29.376224 | orchestrator | 18:50:29.376 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=e9d58f7a-e1ab-43d0-ab51-e442bf45b02c] 2025-05-19 18:50:29.400922 | orchestrator | 18:50:29.400 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
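The extra node volumes are attached only after all node servers exist (null_resource.node_semaphore acts as a barrier), and the attachment IDs above pair node_volume[0..8] with node_server[3], [4] and [5] in a round-robin fashion; a sketch of that pattern, noting that the real index expression in the repository may differ:

# Barrier: volume attachments wait for all node servers
resource "null_resource" "node_semaphore" {
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9
  # volume i goes to node_server[3 + i % 3], as read off the attachment IDs in the log
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
  depends_on  = [null_resource.node_semaphore]
}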
2025-05-19 18:50:29.401022 | orchestrator | 18:50:29.400 STDOUT terraform: Outputs: 2025-05-19 18:50:29.401038 | orchestrator | 18:50:29.400 STDOUT terraform: manager_address = 2025-05-19 18:50:29.401050 | orchestrator | 18:50:29.400 STDOUT terraform: private_key = 2025-05-19 18:50:29.487742 | orchestrator | ok: Runtime: 0:01:46.113850 2025-05-19 18:50:29.510191 | 2025-05-19 18:50:29.510339 | TASK [Fetch manager address] 2025-05-19 18:50:29.976889 | orchestrator | ok 2025-05-19 18:50:29.987262 | 2025-05-19 18:50:29.987420 | TASK [Set manager_host address] 2025-05-19 18:50:30.079970 | orchestrator | ok 2025-05-19 18:50:30.091610 | 2025-05-19 18:50:30.091789 | LOOP [Update ansible collections] 2025-05-19 18:50:31.603144 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 18:50:31.603666 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 18:50:31.603744 | orchestrator | Starting galaxy collection install process 2025-05-19 18:50:31.603781 | orchestrator | Process install dependency map 2025-05-19 18:50:31.603813 | orchestrator | Starting collection install process 2025-05-19 18:50:31.603842 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-05-19 18:50:31.603878 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-05-19 18:50:31.603914 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-19 18:50:31.603979 | orchestrator | ok: Item: commons Runtime: 0:00:01.157797 2025-05-19 18:50:32.471116 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 18:50:32.471373 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 18:50:32.471474 | orchestrator | Starting galaxy collection install process 2025-05-19 18:50:32.471572 | orchestrator | Process install dependency map 2025-05-19 18:50:32.471647 | orchestrator | Starting collection install process 2025-05-19 18:50:32.471724 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-05-19 18:50:32.471795 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-05-19 18:50:32.471862 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-19 18:50:32.471942 | orchestrator | ok: Item: services Runtime: 0:00:00.604782 2025-05-19 18:50:32.490121 | 2025-05-19 18:50:32.490330 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 18:50:43.063971 | orchestrator | ok 2025-05-19 18:50:43.074035 | 2025-05-19 18:50:43.074156 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 18:51:43.121938 | orchestrator | ok 2025-05-19 18:51:43.131290 | 2025-05-19 18:51:43.131527 | TASK [Fetch manager ssh hostkey] 2025-05-19 18:51:44.723078 | orchestrator | Output suppressed because no_log was given 2025-05-19 18:51:44.739820 | 2025-05-19 18:51:44.740026 | TASK [Get ssh keypair from terraform environment] 2025-05-19 18:51:45.280002 | orchestrator | ok: Runtime: 0:00:00.008697 2025-05-19 18:51:45.296080 | 2025-05-19 18:51:45.296250 | TASK [Point out that the following task takes some time and does not give any output] 
2025-05-19 18:51:45.342781 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-19 18:51:45.352454 | 2025-05-19 18:51:45.352619 | TASK [Run manager part 0] 2025-05-19 18:51:46.569912 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 18:51:46.609546 | orchestrator | 2025-05-19 18:51:46.609598 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-19 18:51:46.609620 | orchestrator | 2025-05-19 18:51:46.609637 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-19 18:51:48.358227 | orchestrator | ok: [testbed-manager] 2025-05-19 18:51:48.358282 | orchestrator | 2025-05-19 18:51:48.358301 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 18:51:48.358309 | orchestrator | 2025-05-19 18:51:48.358318 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 18:51:50.294358 | orchestrator | ok: [testbed-manager] 2025-05-19 18:51:50.294584 | orchestrator | 2025-05-19 18:51:50.294615 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 18:51:51.000994 | orchestrator | ok: [testbed-manager] 2025-05-19 18:51:51.001080 | orchestrator | 2025-05-19 18:51:51.001099 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 18:51:51.061770 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.061827 | orchestrator | 2025-05-19 18:51:51.061838 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-19 18:51:51.099383 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.099454 | orchestrator | 2025-05-19 18:51:51.099463 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 18:51:51.129655 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.129708 | orchestrator | 2025-05-19 18:51:51.129715 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-19 18:51:51.156480 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.156528 | orchestrator | 2025-05-19 18:51:51.156533 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 18:51:51.196282 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.196341 | orchestrator | 2025-05-19 18:51:51.196352 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-19 18:51:51.235161 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.235213 | orchestrator | 2025-05-19 18:51:51.235221 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-19 18:51:51.263491 | orchestrator | skipping: [testbed-manager] 2025-05-19 18:51:51.263540 | orchestrator | 2025-05-19 18:51:51.263547 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-19 18:51:52.045700 | orchestrator | changed: [testbed-manager] 2025-05-19 18:51:52.045751 | orchestrator | 2025-05-19 18:51:52.045760 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-05-19 18:54:43.519457 | orchestrator | changed: [testbed-manager] 2025-05-19 18:54:43.519520 | orchestrator | 2025-05-19 18:54:43.519534 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 18:56:04.348059 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:04.348162 | orchestrator | 2025-05-19 18:56:04.348179 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 18:56:26.501344 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:26.501442 | orchestrator | 2025-05-19 18:56:26.501461 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-19 18:56:35.654788 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:35.654883 | orchestrator | 2025-05-19 18:56:35.654900 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 18:56:35.702048 | orchestrator | ok: [testbed-manager] 2025-05-19 18:56:35.702135 | orchestrator | 2025-05-19 18:56:35.702153 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-19 18:56:36.573154 | orchestrator | ok: [testbed-manager] 2025-05-19 18:56:36.573231 | orchestrator | 2025-05-19 18:56:36.573249 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-19 18:56:37.426004 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:37.426123 | orchestrator | 2025-05-19 18:56:37.426140 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-19 18:56:43.962381 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:43.962455 | orchestrator | 2025-05-19 18:56:43.962484 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-19 18:56:50.306092 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:50.306190 | orchestrator | 2025-05-19 18:56:50.306208 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-19 18:56:53.091221 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:53.091381 | orchestrator | 2025-05-19 18:56:53.091402 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-19 18:56:54.989369 | orchestrator | changed: [testbed-manager] 2025-05-19 18:56:54.989458 | orchestrator | 2025-05-19 18:56:54.989474 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-19 18:56:56.150633 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 18:56:56.150674 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 18:56:56.150679 | orchestrator | 2025-05-19 18:56:56.150685 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-19 18:56:56.233515 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 18:56:56.233559 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 18:56:56.233565 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-19 18:56:56.233569 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-19 18:57:00.229887 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 18:57:00.229976 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 18:57:00.229990 | orchestrator | 2025-05-19 18:57:00.230003 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-19 18:57:00.832780 | orchestrator | changed: [testbed-manager] 2025-05-19 18:57:00.832872 | orchestrator | 2025-05-19 18:57:00.832888 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-19 19:00:21.246556 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-19 19:00:21.246781 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-19 19:00:21.246802 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-19 19:00:21.246814 | orchestrator | 2025-05-19 19:00:21.246826 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-19 19:00:23.716850 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-19 19:00:23.716924 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-19 19:00:23.716940 | orchestrator | 2025-05-19 19:00:23.716955 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-19 19:00:23.716970 | orchestrator | 2025-05-19 19:00:23.716984 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:00:25.207342 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:25.207420 | orchestrator | 2025-05-19 19:00:25.207427 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-19 19:00:25.241388 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:25.241437 | orchestrator | 2025-05-19 19:00:25.241443 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-19 19:00:25.302509 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:25.302577 | orchestrator | 2025-05-19 19:00:25.302583 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-19 19:00:26.113147 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:26.113239 | orchestrator | 2025-05-19 19:00:26.113257 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-19 19:00:26.823593 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:26.823638 | orchestrator | 2025-05-19 19:00:26.823647 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-19 19:00:28.217056 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-19 19:00:28.217170 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-19 19:00:28.217186 | orchestrator | 2025-05-19 19:00:28.217220 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-19 19:00:29.655522 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:29.655601 | orchestrator | 2025-05-19 19:00:29.655608 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-19 19:00:31.415380 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 
19:00:31.416562 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-19 19:00:31.416633 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:00:31.416647 | orchestrator | 2025-05-19 19:00:31.416660 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-19 19:00:31.971982 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:31.972078 | orchestrator | 2025-05-19 19:00:31.972094 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-19 19:00:32.042146 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:32.042220 | orchestrator | 2025-05-19 19:00:32.042234 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-19 19:00:32.846898 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:00:32.846958 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:32.846967 | orchestrator | 2025-05-19 19:00:32.846974 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-19 19:00:32.884784 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:32.884838 | orchestrator | 2025-05-19 19:00:32.884845 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-19 19:00:32.913889 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:32.913936 | orchestrator | 2025-05-19 19:00:32.913941 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-19 19:00:32.938293 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:32.938367 | orchestrator | 2025-05-19 19:00:32.938377 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-19 19:00:32.984166 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:32.984218 | orchestrator | 2025-05-19 19:00:32.984227 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-19 19:00:33.722697 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:33.722765 | orchestrator | 2025-05-19 19:00:33.722770 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 19:00:33.722776 | orchestrator | 2025-05-19 19:00:33.722782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:00:35.174451 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:35.174532 | orchestrator | 2025-05-19 19:00:35.174548 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-19 19:00:36.162653 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:36.162741 | orchestrator | 2025-05-19 19:00:36.162757 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:00:36.162770 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-19 19:00:36.162782 | orchestrator | 2025-05-19 19:00:36.719868 | orchestrator | ok: Runtime: 0:08:50.616503 2025-05-19 19:00:36.737576 | 2025-05-19 19:00:36.737807 | TASK [Point out that the log in on the manager is now possible] 2025-05-19 19:00:36.788152 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
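
The osism.commons.operator role above sets up the operator account that later owns /home/dragon on the manager. A reduced sketch of the user and group handling, with the name "dragon" inferred from those later paths and all other details treated as assumptions:

    # Hedged sketch of the core osism.commons.operator tasks; only the adm/sudo
    # group list is taken from the loop items shown above.
    - name: Create operator group
      ansible.builtin.group:
        name: dragon
        state: present

    - name: Create user
      ansible.builtin.user:
        name: dragon
        group: dragon
        shell: /bin/bash
        create_home: true

    - name: Add user to additional groups
      ansible.builtin.user:
        name: dragon
        groups: "{{ item }}"
        append: true
      loop:
        - adm
        - sudo
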
2025-05-19 19:00:36.798047 | 2025-05-19 19:00:36.798288 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-19 19:00:36.835275 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-19 19:00:36.843131 | 2025-05-19 19:00:36.843277 | TASK [Run manager part 1 + 2] 2025-05-19 19:00:37.772770 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 19:00:37.828090 | orchestrator | 2025-05-19 19:00:37.828212 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-19 19:00:37.828233 | orchestrator | 2025-05-19 19:00:37.828266 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:00:40.776539 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:40.776714 | orchestrator | 2025-05-19 19:00:40.776773 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 19:00:40.821754 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:40.821841 | orchestrator | 2025-05-19 19:00:40.821862 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 19:00:40.862130 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:40.862211 | orchestrator | 2025-05-19 19:00:40.862227 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 19:00:40.899140 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:40.899210 | orchestrator | 2025-05-19 19:00:40.899230 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 19:00:40.961625 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:40.961715 | orchestrator | 2025-05-19 19:00:40.961734 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 19:00:41.018656 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:41.018738 | orchestrator | 2025-05-19 19:00:41.018755 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 19:00:41.060662 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-19 19:00:41.060738 | orchestrator | 2025-05-19 19:00:41.060753 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 19:00:41.792236 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:41.792355 | orchestrator | 2025-05-19 19:00:41.792376 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 19:00:41.840639 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:41.840723 | orchestrator | 2025-05-19 19:00:41.840738 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 19:00:43.218641 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:43.218747 | orchestrator | 2025-05-19 19:00:43.218767 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 19:00:43.816384 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:43.816477 | orchestrator | 2025-05-19 19:00:43.816492 | orchestrator | TASK
[osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 19:00:44.979480 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:44.979571 | orchestrator | 2025-05-19 19:00:44.979590 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 19:00:58.136106 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:58.136161 | orchestrator | 2025-05-19 19:00:58.136169 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 19:00:58.822541 | orchestrator | ok: [testbed-manager] 2025-05-19 19:00:58.822583 | orchestrator | 2025-05-19 19:00:58.822590 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 19:00:58.875930 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:00:58.876592 | orchestrator | 2025-05-19 19:00:58.876620 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-19 19:00:59.877959 | orchestrator | changed: [testbed-manager] 2025-05-19 19:00:59.878083 | orchestrator | 2025-05-19 19:00:59.878100 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-19 19:01:00.887193 | orchestrator | changed: [testbed-manager] 2025-05-19 19:01:00.887248 | orchestrator | 2025-05-19 19:01:00.887260 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-19 19:01:01.505446 | orchestrator | changed: [testbed-manager] 2025-05-19 19:01:01.505538 | orchestrator | 2025-05-19 19:01:01.505554 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-19 19:01:01.545297 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 19:01:01.545427 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 19:01:01.545444 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-19 19:01:01.545457 | orchestrator | deprecation_warnings=False in ansible.cfg. 
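
The deprecation warning directly above ("The connection's stdin object is deprecated") also appeared at "Sync sources in /opt/src"; it is the kind of warning ansible.posix.synchronize tends to trigger on ansible-core 2.16 when it rsyncs over the SSH connection, which fits the "Copy testbed repo" task it precedes. A minimal sketch of such a task, with only the /opt/configuration destination taken from this log:

    # Hedged sketch of an rsync-based copy of the testbed repository;
    # src is a hypothetical local checkout path, not taken from the log.
    - name: Copy testbed repo
      ansible.posix.synchronize:
        src: "{{ testbed_src | default('/home/zuul/testbed') }}/"
        dest: /opt/configuration/
        delete: true
        rsync_opts:
          - "--exclude=.git"
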
2025-05-19 19:01:03.709607 | orchestrator | changed: [testbed-manager] 2025-05-19 19:01:03.709708 | orchestrator | 2025-05-19 19:01:03.709725 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-19 19:01:12.603141 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-19 19:01:12.603252 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-19 19:01:12.603270 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-19 19:01:12.603283 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-19 19:01:12.603303 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-19 19:01:12.603314 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-19 19:01:12.603325 | orchestrator | 2025-05-19 19:01:12.603365 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-19 19:01:13.646258 | orchestrator | changed: [testbed-manager] 2025-05-19 19:01:13.646382 | orchestrator | 2025-05-19 19:01:13.646401 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-19 19:01:13.686726 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:01:13.686776 | orchestrator | 2025-05-19 19:01:13.686789 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-19 19:01:16.817700 | orchestrator | changed: [testbed-manager] 2025-05-19 19:01:16.817800 | orchestrator | 2025-05-19 19:01:16.817817 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-19 19:01:16.859968 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:01:16.860044 | orchestrator | 2025-05-19 19:01:16.860059 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-19 19:02:50.574968 | orchestrator | changed: [testbed-manager] 2025-05-19 19:02:50.575022 | orchestrator | 2025-05-19 19:02:50.575033 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 19:02:51.693552 | orchestrator | ok: [testbed-manager] 2025-05-19 19:02:51.693615 | orchestrator | 2025-05-19 19:02:51.693627 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:02:51.693638 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-19 19:02:51.693647 | orchestrator | 2025-05-19 19:02:51.961186 | orchestrator | ok: Runtime: 0:02:14.638043 2025-05-19 19:02:51.974072 | 2025-05-19 19:02:51.974295 | TASK [Reboot manager] 2025-05-19 19:02:53.511739 | orchestrator | ok: Runtime: 0:00:00.934556 2025-05-19 19:02:53.521992 | 2025-05-19 19:02:53.522118 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 19:03:07.618957 | orchestrator | ok 2025-05-19 19:03:07.630072 | 2025-05-19 19:03:07.630229 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 19:04:07.675566 | orchestrator | ok 2025-05-19 19:04:07.685046 | 2025-05-19 19:04:07.685238 | TASK [Deploy manager + bootstrap nodes] 2025-05-19 19:04:10.484409 | orchestrator | 2025-05-19 19:04:10.484648 | orchestrator | # DEPLOY MANAGER 2025-05-19 19:04:10.484671 | orchestrator | 2025-05-19 19:04:10.484685 | orchestrator | + set -e 2025-05-19 19:04:10.484697 | orchestrator | + echo 2025-05-19 19:04:10.484710 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-19 19:04:10.484727 | orchestrator | + echo 2025-05-19 19:04:10.484776 | orchestrator | + cat /opt/manager-vars.sh 2025-05-19 19:04:10.487145 | orchestrator | export NUMBER_OF_NODES=6 2025-05-19 19:04:10.487167 | orchestrator | 2025-05-19 19:04:10.487179 | orchestrator | export CEPH_VERSION=reef 2025-05-19 19:04:10.487191 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-19 19:04:10.487202 | orchestrator | export MANAGER_VERSION=8.1.0 2025-05-19 19:04:10.487223 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-19 19:04:10.487234 | orchestrator | 2025-05-19 19:04:10.487251 | orchestrator | export ARA=false 2025-05-19 19:04:10.487262 | orchestrator | export TEMPEST=false 2025-05-19 19:04:10.487279 | orchestrator | export IS_ZUUL=true 2025-05-19 19:04:10.487289 | orchestrator | 2025-05-19 19:04:10.487306 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:04:10.487324 | orchestrator | export EXTERNAL_API=false 2025-05-19 19:04:10.487334 | orchestrator | 2025-05-19 19:04:10.487355 | orchestrator | export IMAGE_USER=ubuntu 2025-05-19 19:04:10.487365 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-19 19:04:10.487375 | orchestrator | 2025-05-19 19:04:10.487387 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-19 19:04:10.487402 | orchestrator | 2025-05-19 19:04:10.487413 | orchestrator | + echo 2025-05-19 19:04:10.487423 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 19:04:10.488326 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 19:04:10.488349 | orchestrator | ++ INTERACTIVE=false 2025-05-19 19:04:10.488362 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 19:04:10.488374 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 19:04:10.488621 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 19:04:10.488635 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 19:04:10.488645 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 19:04:10.488655 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 19:04:10.488765 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 19:04:10.488779 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 19:04:10.488790 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 19:04:10.488800 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-19 19:04:10.488810 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-19 19:04:10.488820 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 19:04:10.488829 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 19:04:10.488846 | orchestrator | ++ export ARA=false 2025-05-19 19:04:10.488856 | orchestrator | ++ ARA=false 2025-05-19 19:04:10.488874 | orchestrator | ++ export TEMPEST=false 2025-05-19 19:04:10.488884 | orchestrator | ++ TEMPEST=false 2025-05-19 19:04:10.488894 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 19:04:10.488904 | orchestrator | ++ IS_ZUUL=true 2025-05-19 19:04:10.488914 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:04:10.488924 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:04:10.488938 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 19:04:10.488948 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 19:04:10.488957 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 19:04:10.488967 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 19:04:10.488977 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 19:04:10.488987 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 
19:04:10.488997 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 19:04:10.489006 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 19:04:10.489017 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-19 19:04:10.547232 | orchestrator | + docker version 2025-05-19 19:04:10.823990 | orchestrator | Client: Docker Engine - Community 2025-05-19 19:04:10.824120 | orchestrator | Version: 26.1.4 2025-05-19 19:04:10.824143 | orchestrator | API version: 1.45 2025-05-19 19:04:10.824156 | orchestrator | Go version: go1.21.11 2025-05-19 19:04:10.824167 | orchestrator | Git commit: 5650f9b 2025-05-19 19:04:10.824179 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-19 19:04:10.824191 | orchestrator | OS/Arch: linux/amd64 2025-05-19 19:04:10.824204 | orchestrator | Context: default 2025-05-19 19:04:10.824215 | orchestrator | 2025-05-19 19:04:10.824227 | orchestrator | Server: Docker Engine - Community 2025-05-19 19:04:10.824238 | orchestrator | Engine: 2025-05-19 19:04:10.824250 | orchestrator | Version: 26.1.4 2025-05-19 19:04:10.824261 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-19 19:04:10.824272 | orchestrator | Go version: go1.21.11 2025-05-19 19:04:10.824283 | orchestrator | Git commit: de5c9cf 2025-05-19 19:04:10.824332 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-19 19:04:10.824343 | orchestrator | OS/Arch: linux/amd64 2025-05-19 19:04:10.824354 | orchestrator | Experimental: false 2025-05-19 19:04:10.824366 | orchestrator | containerd: 2025-05-19 19:04:10.824377 | orchestrator | Version: 1.7.27 2025-05-19 19:04:10.824388 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-19 19:04:10.824400 | orchestrator | runc: 2025-05-19 19:04:10.824411 | orchestrator | Version: 1.2.5 2025-05-19 19:04:10.824423 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-19 19:04:10.824434 | orchestrator | docker-init: 2025-05-19 19:04:10.824445 | orchestrator | Version: 0.19.0 2025-05-19 19:04:10.824456 | orchestrator | GitCommit: de40ad0 2025-05-19 19:04:10.825970 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-19 19:04:10.834372 | orchestrator | + set -e 2025-05-19 19:04:10.834418 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 19:04:10.834426 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 19:04:10.834433 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 19:04:10.834439 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 19:04:10.834446 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 19:04:10.834452 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 19:04:10.834461 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 19:04:10.834468 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-19 19:04:10.834475 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-19 19:04:10.834481 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 19:04:10.834487 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 19:04:10.834494 | orchestrator | ++ export ARA=false 2025-05-19 19:04:10.834500 | orchestrator | ++ ARA=false 2025-05-19 19:04:10.834507 | orchestrator | ++ export TEMPEST=false 2025-05-19 19:04:10.834514 | orchestrator | ++ TEMPEST=false 2025-05-19 19:04:10.834520 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 19:04:10.834526 | orchestrator | ++ IS_ZUUL=true 2025-05-19 19:04:10.834533 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:04:10.834540 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:04:10.834578 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 19:04:10.834586 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 19:04:10.834600 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 19:04:10.834606 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 19:04:10.834613 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 19:04:10.834619 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 19:04:10.834626 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 19:04:10.834632 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 19:04:10.834638 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 19:04:10.834645 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 19:04:10.834657 | orchestrator | ++ INTERACTIVE=false 2025-05-19 19:04:10.834663 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 19:04:10.834670 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 19:04:10.834718 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-19 19:04:10.834727 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-19 19:04:10.839430 | orchestrator | + set -e 2025-05-19 19:04:10.839469 | orchestrator | + VERSION=8.1.0 2025-05-19 19:04:10.839484 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-19 19:04:10.846680 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-19 19:04:10.846732 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-19 19:04:10.851104 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-19 19:04:10.855320 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-19 19:04:10.862948 | orchestrator | /opt/configuration ~ 2025-05-19 19:04:10.862965 | orchestrator | + set -e 2025-05-19 19:04:10.862974 | orchestrator | + pushd /opt/configuration 2025-05-19 19:04:10.862981 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 19:04:10.866323 | orchestrator | + source /opt/venv/bin/activate 2025-05-19 19:04:10.867084 | orchestrator | ++ deactivate nondestructive 2025-05-19 19:04:10.867095 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:10.867103 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:10.867110 | orchestrator | ++ hash -r 2025-05-19 19:04:10.867125 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:10.867133 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-19 19:04:10.867140 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-19 19:04:10.867333 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-19 19:04:10.867344 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-19 19:04:10.867371 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-19 19:04:10.867379 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-19 19:04:10.867385 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-19 19:04:10.867392 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:04:10.867418 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:04:10.867425 | orchestrator | ++ export PATH 2025-05-19 19:04:10.867431 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:10.867438 | orchestrator | ++ '[' -z '' ']' 2025-05-19 19:04:10.867444 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-19 19:04:10.867633 | orchestrator | ++ PS1='(venv) ' 2025-05-19 19:04:10.867643 | orchestrator | ++ export PS1 2025-05-19 19:04:10.867650 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-19 19:04:10.867656 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-19 19:04:10.867663 | orchestrator | ++ hash -r 2025-05-19 19:04:10.867680 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-19 19:04:11.893492 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-19 19:04:11.893684 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-19 19:04:11.895248 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-19 19:04:11.896542 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-19 19:04:11.897758 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-05-19 19:04:11.907447 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.0) 2025-05-19 19:04:11.908944 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-19 19:04:11.910146 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-19 19:04:11.911448 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-19 19:04:11.943322 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-19 19:04:11.944782 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-19 19:04:11.946457 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-19 19:04:11.947874 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-19 19:04:11.952232 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-19 19:04:12.158641 | orchestrator | ++ which gilt 2025-05-19 19:04:12.160956 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-19 19:04:12.160979 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-19 19:04:12.382164 | orchestrator | osism.cfg-generics: 2025-05-19 19:04:12.382279 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-19 19:04:13.978284 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-19 19:04:13.978431 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-19 19:04:13.978478 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-19 19:04:13.978495 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-19 19:04:15.033190 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-19 19:04:15.045014 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-19 19:04:15.393247 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-19 19:04:15.451407 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 19:04:15.451500 | orchestrator | + deactivate 2025-05-19 19:04:15.451513 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-19 19:04:15.451525 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:04:15.451534 | orchestrator | + export PATH 2025-05-19 19:04:15.451544 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-19 19:04:15.451606 | orchestrator | + '[' -n '' ']' 2025-05-19 19:04:15.451616 | orchestrator | + hash -r 2025-05-19 19:04:15.451625 | orchestrator | + '[' -n '' ']' 2025-05-19 19:04:15.451635 | orchestrator | + unset VIRTUAL_ENV 2025-05-19 19:04:15.451644 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-19 19:04:15.451653 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-19 19:04:15.451662 | orchestrator | + unset -f deactivate 2025-05-19 19:04:15.451671 | orchestrator | + popd 2025-05-19 19:04:15.451681 | orchestrator | ~ 2025-05-19 19:04:15.452403 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-19 19:04:15.452419 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-19 19:04:15.453497 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-19 19:04:15.505376 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-19 19:04:15.505466 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-19 19:04:15.505484 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-19 19:04:15.554179 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 19:04:15.554204 | orchestrator | + source /opt/venv/bin/activate 2025-05-19 19:04:15.554313 | orchestrator | ++ deactivate nondestructive 2025-05-19 19:04:15.554423 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:15.554438 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:15.554450 | orchestrator | ++ hash -r 2025-05-19 19:04:15.554844 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:15.554862 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-19 19:04:15.554873 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-19 19:04:15.554885 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-19 19:04:15.554900 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-19 19:04:15.554922 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-19 19:04:15.554933 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-19 19:04:15.555096 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-19 19:04:15.555112 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:04:15.555125 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:04:15.555136 | orchestrator | ++ export PATH 2025-05-19 19:04:15.555147 | orchestrator | ++ '[' -n '' ']' 2025-05-19 19:04:15.555196 | orchestrator | ++ '[' -z '' ']' 2025-05-19 19:04:15.555211 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-19 19:04:15.555222 | orchestrator | ++ PS1='(venv) ' 2025-05-19 19:04:15.555233 | orchestrator | ++ export PS1 2025-05-19 19:04:15.555377 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-19 19:04:15.555400 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-19 19:04:15.555411 | orchestrator | ++ hash -r 2025-05-19 19:04:15.555428 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-19 19:04:16.702768 | orchestrator | 2025-05-19 19:04:16.702923 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-19 19:04:16.702941 | orchestrator | 2025-05-19 19:04:16.702953 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 19:04:17.268647 | orchestrator | ok: [testbed-manager] 2025-05-19 19:04:17.268771 | orchestrator | 2025-05-19 19:04:17.268787 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-19 19:04:18.246264 | orchestrator | changed: [testbed-manager] 2025-05-19 19:04:18.246387 | orchestrator | 2025-05-19 19:04:18.246404 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-19 
19:04:18.246418 | orchestrator | 2025-05-19 19:04:18.246429 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:04:20.554331 | orchestrator | ok: [testbed-manager] 2025-05-19 19:04:20.554470 | orchestrator | 2025-05-19 19:04:20.554487 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-19 19:04:25.884751 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-19 19:04:25.884888 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-19 19:04:25.884906 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-19 19:04:25.884918 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-19 19:04:25.884929 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-19 19:04:25.884946 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-19 19:04:25.884958 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-19 19:04:25.884972 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-19 19:04:25.884988 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-19 19:04:25.885008 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-19 19:04:25.885029 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-19 19:04:25.885047 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-19 19:04:25.885066 | orchestrator | 2025-05-19 19:04:25.885082 | orchestrator | TASK [Check status] ************************************************************ 2025-05-19 19:05:41.937312 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 19:05:41.937473 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-19 19:05:41.937492 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-19 19:05:41.937504 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-19 19:05:41.937531 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j194331158228.1594', 'results_file': '/home/dragon/.ansible_async/j194331158228.1594', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937552 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j386140733525.1619', 'results_file': '/home/dragon/.ansible_async/j386140733525.1619', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937569 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
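
The "Pull images" / "Check status" pair above is a fire-and-forget pattern: each pull is started asynchronously, then a second task polls the async job ids until every pull has finished, so the FAILED - RETRYING lines are normal polling noise rather than errors. A sketch of that pattern; module and variable names are assumptions, the 120-retry budget matches the log:

    # Hedged sketch of the async pull plus status-polling pattern seen above.
    - name: Pull images
      community.docker.docker_image:
        name: "{{ item }}"     # e.g. registry.osism.tech/osism/osism-ansible:8.1.0
        source: pull
      loop: "{{ manager_images }}"   # assumed variable holding the image list printed above
      async: 600
      poll: 0
      register: pull

    - name: Check status
      ansible.builtin.async_status:
        jid: "{{ item.ansible_job_id }}"
      loop: "{{ pull.results }}"
      register: job
      until: job.finished
      retries: 120   # matches the "120 retries left" countdown in the log
      delay: 5
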
2025-05-19 19:05:41.937581 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j204318974635.1644', 'results_file': '/home/dragon/.ansible_async/j204318974635.1644', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937592 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j381312158638.1676', 'results_file': '/home/dragon/.ansible_async/j381312158638.1676', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937603 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 19:05:41.937666 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j536839453869.1708', 'results_file': '/home/dragon/.ansible_async/j536839453869.1708', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937678 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j563309697439.1741', 'results_file': '/home/dragon/.ansible_async/j563309697439.1741', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937689 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 19:05:41.937743 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j798080198665.1781', 'results_file': '/home/dragon/.ansible_async/j798080198665.1781', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937755 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j555633385988.1808', 'results_file': '/home/dragon/.ansible_async/j555633385988.1808', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937767 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j24282584835.1841', 'results_file': '/home/dragon/.ansible_async/j24282584835.1841', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937778 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j284818721573.1873', 'results_file': '/home/dragon/.ansible_async/j284818721573.1873', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937789 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j772815739922.1911', 'results_file': '/home/dragon/.ansible_async/j772815739922.1911', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937800 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j294587228192.1945', 'results_file': '/home/dragon/.ansible_async/j294587228192.1945', 'changed': True, 'item': 
'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 'item'}) 2025-05-19 19:05:41.937812 | orchestrator | 2025-05-19 19:05:41.937825 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-19 19:05:41.992372 | orchestrator | ok: [testbed-manager] 2025-05-19 19:05:41.992481 | orchestrator | 2025-05-19 19:05:41.992501 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-19 19:05:42.600423 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:42.600549 | orchestrator | 2025-05-19 19:05:42.600565 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-19 19:05:42.950342 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:42.950456 | orchestrator | 2025-05-19 19:05:42.950473 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 19:05:43.313784 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:43.313871 | orchestrator | 2025-05-19 19:05:43.313881 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-19 19:05:43.372760 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:05:43.372821 | orchestrator | 2025-05-19 19:05:43.372831 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-19 19:05:43.711653 | orchestrator | ok: [testbed-manager] 2025-05-19 19:05:43.711761 | orchestrator | 2025-05-19 19:05:43.711777 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-19 19:05:43.824029 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:05:43.824134 | orchestrator | 2025-05-19 19:05:43.824148 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-19 19:05:43.824160 | orchestrator | 2025-05-19 19:05:43.824172 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:05:45.744263 | orchestrator | ok: [testbed-manager] 2025-05-19 19:05:45.744381 | orchestrator | 2025-05-19 19:05:45.744398 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-19 19:05:45.849442 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-19 19:05:45.849542 | orchestrator | 2025-05-19 19:05:45.849557 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-19 19:05:45.908824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-19 19:05:45.908965 | orchestrator | 2025-05-19 19:05:45.908981 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-19 19:05:47.059977 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-19 19:05:47.060099 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-19 19:05:47.060115 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-19 19:05:47.060128 | orchestrator | 2025-05-19 19:05:47.060141 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-19 19:05:48.936408 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-19 19:05:48.936535 | orchestrator | 
changed: [testbed-manager] => (item=traefik.env) 2025-05-19 19:05:48.936551 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-19 19:05:48.936564 | orchestrator | 2025-05-19 19:05:48.936577 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-19 19:05:49.645461 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:05:49.645585 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:49.645604 | orchestrator | 2025-05-19 19:05:49.645680 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-19 19:05:50.290554 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:05:50.290707 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:50.290727 | orchestrator | 2025-05-19 19:05:50.290741 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-19 19:05:50.348657 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:05:50.348756 | orchestrator | 2025-05-19 19:05:50.348772 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-19 19:05:50.706307 | orchestrator | ok: [testbed-manager] 2025-05-19 19:05:50.706439 | orchestrator | 2025-05-19 19:05:50.706468 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-19 19:05:50.780340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-19 19:05:50.780441 | orchestrator | 2025-05-19 19:05:50.780457 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-19 19:05:51.844541 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:51.844702 | orchestrator | 2025-05-19 19:05:51.844723 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-19 19:05:52.687530 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:52.687701 | orchestrator | 2025-05-19 19:05:52.687720 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-19 19:05:55.959393 | orchestrator | changed: [testbed-manager] 2025-05-19 19:05:55.959514 | orchestrator | 2025-05-19 19:05:55.959532 | orchestrator | TASK [Apply netbox role] ******************************************************* 2025-05-19 19:05:56.097398 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-19 19:05:56.097490 | orchestrator | 2025-05-19 19:05:56.097505 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-19 19:05:56.169063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 19:05:56.169153 | orchestrator | 2025-05-19 19:05:56.169167 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-19 19:05:58.815321 | orchestrator | ok: [testbed-manager] 2025-05-19 19:05:58.815438 | orchestrator | 2025-05-19 19:05:58.815455 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 19:05:58.910515 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-19 
19:05:58.910667 | orchestrator | 2025-05-19 19:05:58.910686 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-19 19:05:59.984171 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-19 19:05:59.984283 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-19 19:05:59.984299 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-19 19:05:59.984340 | orchestrator | 2025-05-19 19:05:59.984353 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-19 19:06:00.057564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-19 19:06:00.057667 | orchestrator | 2025-05-19 19:06:00.057683 | orchestrator | TASK [osism.services.netbox : Copy postgres environment files] ***************** 2025-05-19 19:06:00.682667 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-19 19:06:00.682776 | orchestrator | 2025-05-19 19:06:00.682792 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-19 19:06:01.322717 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:01.323491 | orchestrator | 2025-05-19 19:06:01.323526 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 19:06:01.964418 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:06:01.964523 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:01.964539 | orchestrator | 2025-05-19 19:06:01.964552 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-19 19:06:02.365132 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:02.365234 | orchestrator | 2025-05-19 19:06:02.365249 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-19 19:06:02.695347 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:02.695485 | orchestrator | 2025-05-19 19:06:02.695503 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-19 19:06:02.750275 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:02.750366 | orchestrator | 2025-05-19 19:06:02.750391 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-19 19:06:03.374069 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:03.374193 | orchestrator | 2025-05-19 19:06:03.374209 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 19:06:03.446263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-19 19:06:03.446357 | orchestrator | 2025-05-19 19:06:03.446372 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-19 19:06:04.217479 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-19 19:06:04.217601 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-19 19:06:04.217617 | orchestrator | 2025-05-19 19:06:04.217682 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-19 19:06:04.849428 | 
orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-19 19:06:04.849540 | orchestrator | 2025-05-19 19:06:04.849558 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-19 19:06:05.529879 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:05.529990 | orchestrator | 2025-05-19 19:06:05.530006 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-19 19:06:05.581793 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:05.581859 | orchestrator | 2025-05-19 19:06:05.581873 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-19 19:06:06.208354 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:06.208466 | orchestrator | 2025-05-19 19:06:06.208483 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 19:06:08.033446 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:06:08.033557 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:06:08.033572 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:06:08.033585 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:08.033600 | orchestrator | 2025-05-19 19:06:08.033613 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-19 19:06:13.906342 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-19 19:06:13.906446 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-19 19:06:13.906457 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-19 19:06:13.906465 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-19 19:06:13.906491 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-19 19:06:13.906498 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-19 19:06:13.906505 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-19 19:06:13.906525 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-19 19:06:13.906533 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-19 19:06:13.906540 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-19 19:06:13.906546 | orchestrator | 2025-05-19 19:06:13.906554 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-19 19:06:14.548987 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-19 19:06:14.549118 | orchestrator | 2025-05-19 19:06:14.549135 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-19 19:06:14.635344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-19 19:06:14.635420 | orchestrator | 2025-05-19 19:06:14.635434 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-19 19:06:15.332253 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:15.332375 | orchestrator | 2025-05-19 19:06:15.332391 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-19 19:06:15.948719 | orchestrator | ok: [testbed-manager] 
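
Both roles ensure the shared external Docker network for traefik exists before attaching their compose services to it. A sketch of such a task (the network name is an assumption, the log only records the task name):

    # Hedged sketch of an idempotent "create external network" task.
    - name: Create traefik external network
      community.docker.docker_network:
        name: traefik   # assumed network name
        state: present

Because the task is idempotent, the traefik role reports it as changed at 19:05:51 while the later run from the netbox role comes back ok at 19:06:15.
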
2025-05-19 19:06:15.948840 | orchestrator | 2025-05-19 19:06:15.948859 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-19 19:06:16.656980 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:16.657103 | orchestrator | 2025-05-19 19:06:16.657119 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-19 19:06:19.173421 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:19.173550 | orchestrator | 2025-05-19 19:06:19.173568 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-19 19:06:20.118379 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:20.118479 | orchestrator | 2025-05-19 19:06:20.118489 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-19 19:06:42.245219 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 2025-05-19 19:06:42.245363 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:42.245382 | orchestrator | 2025-05-19 19:06:42.245395 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-19 19:06:42.291002 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:42.291090 | orchestrator | 2025-05-19 19:06:42.291104 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-19 19:06:42.291117 | orchestrator | 2025-05-19 19:06:42.291129 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-19 19:06:42.330144 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:42.330181 | orchestrator | 2025-05-19 19:06:42.330194 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 19:06:42.389089 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-19 19:06:42.389164 | orchestrator | 2025-05-19 19:06:42.389176 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-19 19:06:43.163151 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:43.163276 | orchestrator | 2025-05-19 19:06:43.163292 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-19 19:06:43.231128 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:43.231270 | orchestrator | 2025-05-19 19:06:43.231287 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-19 19:06:43.276571 | orchestrator | ok: [testbed-manager] => { 2025-05-19 19:06:43.276646 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-19 19:06:43.276705 | orchestrator | } 2025-05-19 19:06:43.276718 | orchestrator | 2025-05-19 19:06:43.276730 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-19 19:06:43.901485 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:43.901644 | orchestrator | 2025-05-19 19:06:43.901698 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-19 19:06:44.725077 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:44.725204 | orchestrator | 2025-05-19 19:06:44.725220 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set 
postgres image version fact] ****** 2025-05-19 19:06:44.794446 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:44.794574 | orchestrator | 2025-05-19 19:06:44.794590 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-19 19:06:44.844793 | orchestrator | ok: [testbed-manager] => { 2025-05-19 19:06:44.844892 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-19 19:06:44.844907 | orchestrator | } 2025-05-19 19:06:44.844919 | orchestrator | 2025-05-19 19:06:44.844932 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-19 19:06:44.890957 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:44.891045 | orchestrator | 2025-05-19 19:06:44.891059 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-19 19:06:44.944812 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:44.944865 | orchestrator | 2025-05-19 19:06:44.944878 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-19 19:06:45.002211 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:45.002307 | orchestrator | 2025-05-19 19:06:45.002326 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-19 19:06:45.060567 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:45.060742 | orchestrator | 2025-05-19 19:06:45.060770 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-19 19:06:45.118369 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:45.118470 | orchestrator | 2025-05-19 19:06:45.118484 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-19 19:06:45.222544 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:06:45.222637 | orchestrator | 2025-05-19 19:06:45.222671 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 19:06:46.323160 | orchestrator | changed: [testbed-manager] 2025-05-19 19:06:46.323293 | orchestrator | 2025-05-19 19:06:46.323311 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-19 19:06:46.392272 | orchestrator | ok: [testbed-manager] 2025-05-19 19:06:46.392321 | orchestrator | 2025-05-19 19:06:46.392334 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-19 19:07:46.443024 | orchestrator | Pausing for 60 seconds 2025-05-19 19:07:46.443156 | orchestrator | changed: [testbed-manager] 2025-05-19 19:07:46.443173 | orchestrator | 2025-05-19 19:07:46.443187 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-19 19:07:46.490262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-19 19:07:46.490347 | orchestrator | 2025-05-19 19:07:46.490362 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-19 19:11:57.874498 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 
2025-05-19 19:11:57.874648 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-19 19:11:57.874665 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-19 19:11:57.874677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-19 19:11:57.874689 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-19 19:11:57.874700 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-19 19:11:57.874711 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-19 19:11:57.874722 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-19 19:11:57.874733 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-19 19:11:57.874776 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-19 19:11:57.874787 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-19 19:11:57.874798 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-19 19:11:57.874809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-19 19:11:57.874820 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 2025-05-19 19:11:57.874873 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-19 19:11:57.874888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-19 19:11:57.874899 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-19 19:11:57.874910 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-19 19:11:57.874921 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-19 19:11:57.874931 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-19 19:11:57.874942 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-19 19:11:57.874953 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-19 19:11:57.874964 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-19 19:11:57.874974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
2025-05-19 19:11:57.874986 | orchestrator | changed: [testbed-manager] 2025-05-19 19:11:57.874999 | orchestrator | 2025-05-19 19:11:57.875012 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-19 19:11:57.875023 | orchestrator | 2025-05-19 19:11:57.875034 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:11:59.844094 | orchestrator | ok: [testbed-manager] 2025-05-19 19:11:59.844225 | orchestrator | 2025-05-19 19:11:59.844242 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-19 19:11:59.960950 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-19 19:11:59.961070 | orchestrator | 2025-05-19 19:11:59.961086 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-19 19:12:00.021244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 19:12:00.021352 | orchestrator | 2025-05-19 19:12:00.021366 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-19 19:12:01.875918 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:01.876048 | orchestrator | 2025-05-19 19:12:01.876064 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-19 19:12:01.922797 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:01.922943 | orchestrator | 2025-05-19 19:12:01.922967 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-19 19:12:02.011346 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-19 19:12:02.011459 | orchestrator | 2025-05-19 19:12:02.011474 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-19 19:12:04.786777 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-19 19:12:04.786947 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-19 19:12:04.786964 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-19 19:12:04.786978 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-19 19:12:04.787027 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-19 19:12:04.787040 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-19 19:12:04.787051 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-19 19:12:04.787071 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-19 19:12:04.787091 | orchestrator | 2025-05-19 19:12:04.787114 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-19 19:12:05.424944 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:05.425067 | orchestrator | 2025-05-19 19:12:05.425082 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-19 19:12:05.504737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-19 19:12:05.504775 | orchestrator | 2025-05-19 19:12:05.504788 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2025-05-19 19:12:06.703777 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-19 19:12:06.703929 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-19 19:12:06.703944 | orchestrator | 2025-05-19 19:12:06.703957 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-19 19:12:07.331555 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:07.331670 | orchestrator | 2025-05-19 19:12:07.331684 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-19 19:12:07.396903 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:12:07.396956 | orchestrator | 2025-05-19 19:12:07.396970 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-19 19:12:07.456020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-19 19:12:07.456069 | orchestrator | 2025-05-19 19:12:07.456110 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-19 19:12:08.929628 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:12:08.929754 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:12:08.929768 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:08.929781 | orchestrator | 2025-05-19 19:12:08.929794 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-19 19:12:09.573730 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:09.573888 | orchestrator | 2025-05-19 19:12:09.573904 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-19 19:12:09.648584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-19 19:12:09.648641 | orchestrator | 2025-05-19 19:12:09.648654 | orchestrator | TASK [osism.services.manager : Copy secret files] ****************************** 2025-05-19 19:12:10.887287 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:12:10.887410 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:12:10.887425 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:10.887439 | orchestrator | 2025-05-19 19:12:10.887452 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-19 19:12:11.498944 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:11.499067 | orchestrator | 2025-05-19 19:12:11.499081 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-19 19:12:11.599689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-19 19:12:11.599777 | orchestrator | 2025-05-19 19:12:11.599790 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-19 19:12:12.188708 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:12.188894 | orchestrator | 2025-05-19 19:12:12.188913 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-19 19:12:12.592697 | orchestrator | changed: 
[testbed-manager] 2025-05-19 19:12:12.592821 | orchestrator | 2025-05-19 19:12:12.592864 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-19 19:12:13.775601 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-19 19:12:13.775768 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-19 19:12:13.775785 | orchestrator | 2025-05-19 19:12:13.775798 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-19 19:12:14.515391 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:14.515522 | orchestrator | 2025-05-19 19:12:14.515541 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-19 19:12:14.934534 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:14.934655 | orchestrator | 2025-05-19 19:12:14.934670 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-19 19:12:15.285433 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:15.285564 | orchestrator | 2025-05-19 19:12:15.285581 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-19 19:12:15.329643 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:12:15.329714 | orchestrator | 2025-05-19 19:12:15.329728 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-19 19:12:15.399357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-19 19:12:15.399388 | orchestrator | 2025-05-19 19:12:15.399401 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-19 19:12:15.450571 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:15.450601 | orchestrator | 2025-05-19 19:12:15.450614 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-19 19:12:17.463520 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-19 19:12:17.463651 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-19 19:12:17.463667 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-19 19:12:17.463680 | orchestrator | 2025-05-19 19:12:17.463694 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-19 19:12:18.124870 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:18.124995 | orchestrator | 2025-05-19 19:12:18.125012 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-19 19:12:18.808328 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:18.808423 | orchestrator | 2025-05-19 19:12:18.808430 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-19 19:12:19.535702 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:19.535884 | orchestrator | 2025-05-19 19:12:19.535903 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-19 19:12:19.611719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-19 19:12:19.611816 | orchestrator | 2025-05-19 19:12:19.611864 | orchestrator | TASK [osism.services.manager : 
Include scripts vars file] ********************** 2025-05-19 19:12:19.655043 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:19.655125 | orchestrator | 2025-05-19 19:12:19.655139 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-19 19:12:20.384736 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-19 19:12:20.384921 | orchestrator | 2025-05-19 19:12:20.384939 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-19 19:12:20.476824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-19 19:12:20.476932 | orchestrator | 2025-05-19 19:12:20.476948 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-19 19:12:21.223504 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:21.223621 | orchestrator | 2025-05-19 19:12:21.223636 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-19 19:12:21.829170 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:21.829299 | orchestrator | 2025-05-19 19:12:21.829316 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-19 19:12:21.884225 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:12:21.884277 | orchestrator | 2025-05-19 19:12:21.884320 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-19 19:12:21.940656 | orchestrator | ok: [testbed-manager] 2025-05-19 19:12:21.940714 | orchestrator | 2025-05-19 19:12:21.940727 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-19 19:12:22.774920 | orchestrator | changed: [testbed-manager] 2025-05-19 19:12:22.775042 | orchestrator | 2025-05-19 19:12:22.775058 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-19 19:13:03.922418 | orchestrator | changed: [testbed-manager] 2025-05-19 19:13:03.922534 | orchestrator | 2025-05-19 19:13:03.922544 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-19 19:13:04.600256 | orchestrator | ok: [testbed-manager] 2025-05-19 19:13:04.600385 | orchestrator | 2025-05-19 19:13:04.600402 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-19 19:13:07.345536 | orchestrator | changed: [testbed-manager] 2025-05-19 19:13:07.345684 | orchestrator | 2025-05-19 19:13:07.345704 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-19 19:13:07.405887 | orchestrator | ok: [testbed-manager] 2025-05-19 19:13:07.406005 | orchestrator | 2025-05-19 19:13:07.406072 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-19 19:13:07.406087 | orchestrator | 2025-05-19 19:13:07.406099 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-19 19:13:07.454674 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:13:07.454826 | orchestrator | 2025-05-19 19:13:07.454842 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-19 19:14:07.519454 | orchestrator | Pausing for 60 seconds 2025-05-19 19:14:07.519597 | 
orchestrator | changed: [testbed-manager] 2025-05-19 19:14:07.519613 | orchestrator | 2025-05-19 19:14:07.519626 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-19 19:14:12.911731 | orchestrator | changed: [testbed-manager] 2025-05-19 19:14:12.911877 | orchestrator | 2025-05-19 19:14:12.911895 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-19 19:14:54.509394 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-19 19:14:54.509542 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-05-19 19:14:54.509579 | orchestrator | changed: [testbed-manager] 2025-05-19 19:14:54.509594 | orchestrator | 2025-05-19 19:14:54.509607 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-19 19:15:00.111412 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:00.111542 | orchestrator | 2025-05-19 19:15:00.111559 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-19 19:15:00.190497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-19 19:15:00.190597 | orchestrator | 2025-05-19 19:15:00.190613 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-19 19:15:00.190654 | orchestrator | 2025-05-19 19:15:00.190666 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-19 19:15:00.235928 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:00.236022 | orchestrator | 2025-05-19 19:15:00.236040 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:15:00.236055 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-19 19:15:00.236068 | orchestrator | 2025-05-19 19:15:00.351404 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 19:15:00.351506 | orchestrator | + deactivate 2025-05-19 19:15:00.351522 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-19 19:15:00.351536 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 19:15:00.351548 | orchestrator | + export PATH 2025-05-19 19:15:00.351559 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-19 19:15:00.351572 | orchestrator | + '[' -n '' ']' 2025-05-19 19:15:00.351584 | orchestrator | + hash -r 2025-05-19 19:15:00.351595 | orchestrator | + '[' -n '' ']' 2025-05-19 19:15:00.351606 | orchestrator | + unset VIRTUAL_ENV 2025-05-19 19:15:00.351683 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-19 19:15:00.351741 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-19 19:15:00.351754 | orchestrator | + unset -f deactivate 2025-05-19 19:15:00.351767 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-19 19:15:00.359094 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 19:15:00.359120 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-19 19:15:00.359131 | orchestrator | + local max_attempts=60 2025-05-19 19:15:00.359143 | orchestrator | + local name=ceph-ansible 2025-05-19 19:15:00.359154 | orchestrator | + local attempt_num=1 2025-05-19 19:15:00.360234 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-19 19:15:00.399571 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:15:00.399690 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-19 19:15:00.399704 | orchestrator | + local max_attempts=60 2025-05-19 19:15:00.399718 | orchestrator | + local name=kolla-ansible 2025-05-19 19:15:00.399729 | orchestrator | + local attempt_num=1 2025-05-19 19:15:00.400717 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-19 19:15:00.432600 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:15:00.432680 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-19 19:15:00.432695 | orchestrator | + local max_attempts=60 2025-05-19 19:15:00.432708 | orchestrator | + local name=osism-ansible 2025-05-19 19:15:00.432719 | orchestrator | + local attempt_num=1 2025-05-19 19:15:00.434072 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-19 19:15:00.471479 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:15:00.471551 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-19 19:15:00.471566 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-19 19:15:01.227790 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-19 19:15:01.291333 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-19 19:15:01.291432 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-19 19:15:01.291449 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-19 19:15:01.511522 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-19 19:15:01.511684 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511702 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511738 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-19 19:15:01.511753 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-19 19:15:01.511770 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511782 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511794 | orchestrator | manager-flower-1 
registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511805 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 seconds (healthy) 2025-05-19 19:15:01.511817 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511849 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-19 19:15:01.511861 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511872 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511883 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-19 19:15:01.511895 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511906 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511917 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.511929 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-19 19:15:01.522950 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-19 19:15:01.680530 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-19 19:15:01.680664 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-19 19:15:01.680681 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-19 19:15:01.680694 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-19 19:15:01.680708 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-19 19:15:01.688367 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-19 19:15:01.731157 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-19 19:15:01.731239 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-19 19:15:01.734223 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-19 19:15:03.248892 | orchestrator | 2025-05-19 19:15:03 | INFO  | Task ddbe134e-d95e-42d8-8489-097f996046b3 (resolvconf) was prepared for execution. 
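The "+"-prefixed lines above are a bash xtrace of a wait_for_container_healthy helper from the testbed scripts; only its traced calls and local variables are visible, not its body. A minimal reconstruction consistent with that trace (the sleep interval and failure handling are assumptions):

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports it as healthy.
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}

wait_for_container_healthy 60 ceph-ansible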
2025-05-19 19:15:03.248994 | orchestrator | 2025-05-19 19:15:03 | INFO  | It takes a moment until task ddbe134e-d95e-42d8-8489-097f996046b3 (resolvconf) has been started and output is visible here. 2025-05-19 19:15:06.236980 | orchestrator | 2025-05-19 19:15:06.237119 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-19 19:15:06.237137 | orchestrator | 2025-05-19 19:15:06.237652 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:15:06.239195 | orchestrator | Monday 19 May 2025 19:15:06 +0000 (0:00:00.084) 0:00:00.084 ************ 2025-05-19 19:15:10.203080 | orchestrator | ok: [testbed-manager] 2025-05-19 19:15:10.203254 | orchestrator | 2025-05-19 19:15:10.205793 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-19 19:15:10.206600 | orchestrator | Monday 19 May 2025 19:15:10 +0000 (0:00:03.967) 0:00:04.051 ************ 2025-05-19 19:15:10.262706 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:10.263092 | orchestrator | 2025-05-19 19:15:10.264867 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-19 19:15:10.267530 | orchestrator | Monday 19 May 2025 19:15:10 +0000 (0:00:00.060) 0:00:04.112 ************ 2025-05-19 19:15:10.346484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-19 19:15:10.346575 | orchestrator | 2025-05-19 19:15:10.347298 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-19 19:15:10.347324 | orchestrator | Monday 19 May 2025 19:15:10 +0000 (0:00:00.084) 0:00:04.196 ************ 2025-05-19 19:15:10.426381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 19:15:10.428039 | orchestrator | 2025-05-19 19:15:10.431080 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-19 19:15:10.431110 | orchestrator | Monday 19 May 2025 19:15:10 +0000 (0:00:00.082) 0:00:04.279 ************ 2025-05-19 19:15:11.524345 | orchestrator | ok: [testbed-manager] 2025-05-19 19:15:11.524648 | orchestrator | 2025-05-19 19:15:11.525068 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-19 19:15:11.525330 | orchestrator | Monday 19 May 2025 19:15:11 +0000 (0:00:01.096) 0:00:05.375 ************ 2025-05-19 19:15:11.586834 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:11.587054 | orchestrator | 2025-05-19 19:15:11.587549 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-19 19:15:11.588630 | orchestrator | Monday 19 May 2025 19:15:11 +0000 (0:00:00.063) 0:00:05.439 ************ 2025-05-19 19:15:12.043138 | orchestrator | ok: [testbed-manager] 2025-05-19 19:15:12.043256 | orchestrator | 2025-05-19 19:15:12.044076 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-19 19:15:12.045053 | orchestrator | Monday 19 May 2025 19:15:12 +0000 (0:00:00.454) 0:00:05.893 ************ 2025-05-19 19:15:12.123554 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:12.123689 | orchestrator | 2025-05-19 19:15:12.125105 | orchestrator | TASK 
[osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-19 19:15:12.127463 | orchestrator | Monday 19 May 2025 19:15:12 +0000 (0:00:00.080) 0:00:05.973 ************ 2025-05-19 19:15:12.677578 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:12.679643 | orchestrator | 2025-05-19 19:15:12.679774 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-19 19:15:12.680094 | orchestrator | Monday 19 May 2025 19:15:12 +0000 (0:00:00.551) 0:00:06.525 ************ 2025-05-19 19:15:13.790570 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:13.790737 | orchestrator | 2025-05-19 19:15:13.791187 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-19 19:15:13.791670 | orchestrator | Monday 19 May 2025 19:15:13 +0000 (0:00:01.114) 0:00:07.640 ************ 2025-05-19 19:15:14.698879 | orchestrator | ok: [testbed-manager] 2025-05-19 19:15:14.699112 | orchestrator | 2025-05-19 19:15:14.699488 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-19 19:15:14.700129 | orchestrator | Monday 19 May 2025 19:15:14 +0000 (0:00:00.908) 0:00:08.548 ************ 2025-05-19 19:15:14.767682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-19 19:15:14.767893 | orchestrator | 2025-05-19 19:15:14.768732 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-19 19:15:14.769457 | orchestrator | Monday 19 May 2025 19:15:14 +0000 (0:00:00.069) 0:00:08.618 ************ 2025-05-19 19:15:15.944214 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:15.944317 | orchestrator | 2025-05-19 19:15:15.945335 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:15:15.945591 | orchestrator | 2025-05-19 19:15:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:15:15.945746 | orchestrator | 2025-05-19 19:15:15 | INFO  | Please wait and do not abort execution. 
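For reference, the main effect of the osism.commons.resolvconf tasks above can be reproduced by hand roughly as follows (the role uses Ansible modules and templates; the contents of the copied configuration files are not shown in this log):

# Point /etc/resolv.conf at systemd-resolved's stub resolver.
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
# Enable and (re)start systemd-resolved so the copied configuration takes effect.
systemctl enable --now systemd-resolved
systemctl restart systemd-resolved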
2025-05-19 19:15:15.947050 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:15:15.947750 | orchestrator | 2025-05-19 19:15:15.948689 | orchestrator | Monday 19 May 2025 19:15:15 +0000 (0:00:01.174) 0:00:09.792 ************ 2025-05-19 19:15:15.949171 | orchestrator | =============================================================================== 2025-05-19 19:15:15.949936 | orchestrator | Gathering Facts --------------------------------------------------------- 3.97s 2025-05-19 19:15:15.950583 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.17s 2025-05-19 19:15:15.951072 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-05-19 19:15:15.951725 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2025-05-19 19:15:15.952240 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.91s 2025-05-19 19:15:15.953267 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-05-19 19:15:15.953747 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.45s 2025-05-19 19:15:15.954732 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-05-19 19:15:15.955314 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-05-19 19:15:15.955635 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-19 19:15:15.956373 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-05-19 19:15:15.956879 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-05-19 19:15:15.957307 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-05-19 19:15:16.302070 | orchestrator | + osism apply sshconfig 2025-05-19 19:15:17.715813 | orchestrator | 2025-05-19 19:15:17 | INFO  | Task c37452ba-ec6c-4675-88f3-194f0766f681 (sshconfig) was prepared for execution. 2025-05-19 19:15:17.715952 | orchestrator | 2025-05-19 19:15:17 | INFO  | It takes a moment until task c37452ba-ec6c-4675-88f3-194f0766f681 (sshconfig) has been started and output is visible here. 
2025-05-19 19:15:20.694525 | orchestrator | 2025-05-19 19:15:20.694802 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-19 19:15:20.696589 | orchestrator | 2025-05-19 19:15:20.697629 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-19 19:15:20.698410 | orchestrator | Monday 19 May 2025 19:15:20 +0000 (0:00:00.111) 0:00:00.111 ************ 2025-05-19 19:15:21.238720 | orchestrator | ok: [testbed-manager] 2025-05-19 19:15:21.239750 | orchestrator | 2025-05-19 19:15:21.240518 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-19 19:15:21.241482 | orchestrator | Monday 19 May 2025 19:15:21 +0000 (0:00:00.544) 0:00:00.656 ************ 2025-05-19 19:15:21.729095 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:21.729762 | orchestrator | 2025-05-19 19:15:21.731324 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-19 19:15:21.731851 | orchestrator | Monday 19 May 2025 19:15:21 +0000 (0:00:00.489) 0:00:01.145 ************ 2025-05-19 19:15:27.344822 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-19 19:15:27.344961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-19 19:15:27.344979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-19 19:15:27.345015 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-19 19:15:27.345027 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-19 19:15:27.345295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-19 19:15:27.346900 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-19 19:15:27.347555 | orchestrator | 2025-05-19 19:15:27.348216 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-19 19:15:27.349053 | orchestrator | Monday 19 May 2025 19:15:27 +0000 (0:00:05.614) 0:00:06.760 ************ 2025-05-19 19:15:27.425839 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:27.426471 | orchestrator | 2025-05-19 19:15:27.427401 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-19 19:15:27.428244 | orchestrator | Monday 19 May 2025 19:15:27 +0000 (0:00:00.084) 0:00:06.844 ************ 2025-05-19 19:15:27.970448 | orchestrator | changed: [testbed-manager] 2025-05-19 19:15:27.971108 | orchestrator | 2025-05-19 19:15:27.972762 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:15:27.972800 | orchestrator | 2025-05-19 19:15:27 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:15:27.972815 | orchestrator | 2025-05-19 19:15:27 | INFO  | Please wait and do not abort execution. 
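The sshconfig play above renders one SSH configuration fragment per inventory host into ~/.ssh/config.d and then assembles them into ~/.ssh/config. A rough shell equivalent, with the per-host options assumed ("dragon" is the operator user inferred from /home/dragon earlier in this log):

mkdir -p ~/.ssh/config.d
for host in testbed-manager testbed-node-{0..5}; do
    # The real role templates more options per host (e.g. HostName); this is a placeholder fragment.
    printf 'Host %s\n    User dragon\n\n' "$host" > ~/.ssh/config.d/"$host"
done
# Assemble all fragments into a single SSH client configuration.
cat ~/.ssh/config.d/* > ~/.ssh/config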
2025-05-19 19:15:27.973181 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:15:27.974249 | orchestrator | 2025-05-19 19:15:27.975015 | orchestrator | Monday 19 May 2025 19:15:27 +0000 (0:00:00.545) 0:00:07.389 ************ 2025-05-19 19:15:27.976239 | orchestrator | =============================================================================== 2025-05-19 19:15:27.977045 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.61s 2025-05-19 19:15:27.977824 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2025-05-19 19:15:27.978447 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s 2025-05-19 19:15:27.979070 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-05-19 19:15:27.979612 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-05-19 19:15:28.366898 | orchestrator | + osism apply known-hosts 2025-05-19 19:15:29.765115 | orchestrator | 2025-05-19 19:15:29 | INFO  | Task 699dadae-d5df-477e-a592-c9b11c4d37f1 (known-hosts) was prepared for execution. 2025-05-19 19:15:29.765217 | orchestrator | 2025-05-19 19:15:29 | INFO  | It takes a moment until task 699dadae-d5df-477e-a592-c9b11c4d37f1 (known-hosts) has been started and output is visible here. 2025-05-19 19:15:32.749783 | orchestrator | 2025-05-19 19:15:32.749899 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-19 19:15:32.750713 | orchestrator | 2025-05-19 19:15:32.752251 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-19 19:15:32.752747 | orchestrator | Monday 19 May 2025 19:15:32 +0000 (0:00:00.106) 0:00:00.106 ************ 2025-05-19 19:15:38.612772 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-19 19:15:38.612898 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-19 19:15:38.614395 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-19 19:15:38.614497 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-19 19:15:38.615199 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-19 19:15:38.618941 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-19 19:15:38.619399 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-19 19:15:38.620206 | orchestrator | 2025-05-19 19:15:38.620961 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-19 19:15:38.621647 | orchestrator | Monday 19 May 2025 19:15:38 +0000 (0:00:05.865) 0:00:05.971 ************ 2025-05-19 19:15:38.771271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-19 19:15:38.771548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-19 19:15:38.772341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-19 
19:15:38.773150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-19 19:15:38.773938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-19 19:15:38.774751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-19 19:15:38.775247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-19 19:15:38.775895 | orchestrator | 2025-05-19 19:15:38.776696 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:38.777456 | orchestrator | Monday 19 May 2025 19:15:38 +0000 (0:00:00.159) 0:00:06.130 ************ 2025-05-19 19:15:39.958357 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeOXA+1N8YaOHTwPI/gKrOAVvFMq6zSOGlK+/caHSXgthpq7XHFK/YkQbUywlwo8bLadetLbALJsCmDe80eILNqXa4V+uXd4h6iEqLVEVqB1uND42AOdFJz/eHS80hYSLNzes9QcSJ7MFxZaMJ7gtBSfff5IjH+i0z2IurqfZnDefD8DD+AAXu4K1kmeDvktMBNd4kIuJRtiBFCDnHzxCPoyWsi+TewWyMFIL9/p3TyAABuiwKUKfjw+wGX5Waf8JN94uWq7AF73IUwJRYi9SF+m5HOzLqab4ooXgashfgWkKWY0ceUJIRtN1Fvdqv9S6hVm/i2ndP/RCxtU2RfByKkzTYcirM0X/pGcZY21kbVblWFuG9L5qMHffBn7+rA6njFDyWP1Hc8wL/E1MhxNIEaQPT6QXKko2OT/o/mvz61hVUtY6wZdNNnhO4X2i49OhC25CtWDSYOWbi3phCK0bAFisTQbv1L9hdwHl6shJRo3YAcBSAkp5s6xkYevORdOk=) 2025-05-19 19:15:39.958882 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJb5S13/DuhYVMbTG+xudyENxrDvae7bVVi5V2k5mu93UXeSaZ+aWQkqEaDdtIJjy9YYmorHPy0eXlNS5jSxuko=) 2025-05-19 19:15:39.959888 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKigN9VMkMNRKfTdfBqjSUFHOjY3pA36QttJn5/GH8J4) 2025-05-19 19:15:39.959995 | orchestrator | 2025-05-19 19:15:39.961031 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:39.961685 | orchestrator | Monday 19 May 2025 19:15:39 +0000 (0:00:01.185) 0:00:07.316 ************ 2025-05-19 19:15:40.956503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOLLx3MYpwc+QHSJNb0U9IkQDSixCw3G0KCRZ5OnY44f) 2025-05-19 19:15:40.957435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpF6GYEeQ1VnvcxnL5gBSjc7+hayFrFbFdjOtOXgEJeMWwZEws9kQXiDIc1kZ77ANlNyNiL3LHvIoVo0ENyLVfgR1Q8g6Vy/57hffD2rUz3IFn4AThH0OPXKwdVay79Ex45BFB0iNlkSz9OKBOLRY6tUHqE61bVv3eLgfJkYsp+ie1XU+U9myDTGd8T2zWk0501BIhyu5qWqjGtedcFSIOYdQ31gaEY3UbWAd/bxlUfzDOtCBlNqQ1NRmkID9JdIUtFwT/Nm90FpbeGkkLU6mkpK3L7cIAqrR4LrMzxK74XKBK7gqW703Ex7EKyUKHUXN1OwtSyATyjg82PdPaann0pQbpHNtV6CpE3rgaosFKsY3JQHdMeEjLUx3/PPuQDJxdGx4UuzFqWDYwJmGWeNxWr5hehUFBBZylrrdJEjSyBzKdGrXugaVMMiD7QNuLYCDVLQx8Sio2edc5yYxoIKdvGISoHRaHONwy2G4FP0znjIWVVbCE/xaLBQ00jdlrVvc=) 2025-05-19 19:15:40.958486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOAwzJVUgZbugzOgp8c3NQbLEgMUW56UCSAQHkH6RMZdyOk+QhGr9us5uLyG+UQqcsJkFD4Gh+VxlZoIWILJ3s=) 2025-05-19 19:15:40.958717 | orchestrator | 2025-05-19 19:15:40.959392 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:40.960029 | orchestrator | Monday 19 May 2025 19:15:40 +0000 (0:00:00.998) 0:00:08.314 ************ 2025-05-19 19:15:41.971367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2dEikNCAbTtWiQeBvRg6Y6LxtlxFyvA38OjEtTakCrJN4KzB9HC+aXZvxTg1c1fTQeMq2hBYWzt2EAWyeBLb4=) 2025-05-19 19:15:41.971867 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOW872LUZLbGn1zVyef8yQFUCVvJ3WM7TwRquOhu2sql) 2025-05-19 19:15:41.972727 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClls5VxR1nh6XugiMYbXKfOgMxL2/vYS89gNGnpGwTWUGWmu48yTNJFt57/ENUd5iaWvH9BOGkYJb9fzpigY/GwcadvIxuM6nDFiWElfnhTq46UyoXY49gR/6KWl2nYOONFiVFALB3R2RQnRXrphzweAw67WCNtSSZL2fHbXmyMIXjHUf0pW+micOBqFqMg9nvAVaJ6NPuC8q4OZQsmck1Ukm+5q7KkvC5ltvJEtOMmCjvPmIq+4Ax+93QjJts9wmA3TB670yBsd/znUOf/khW/M4en7sRoNmClG23ss+3cuchgiRjAVHaKDQ0DfyhBXPPutlmw5lSDwID5lsvjuAHYNCK95FSLTScVzVXXPpITBen2HKvhNE/W8vDn5iMxXznC6dWh8jSj4fJ7Zise7MeGfKykJW6Qztz4PqgsrLOHC4Aw+lxHnEFSXGjfpQ9Gt1T/E+r1UnKZ65IwOG240vcVXw8flSdlvZxDK8y+XjAaFCxj/ufYVyf4hn4yYqBd3U=) 2025-05-19 19:15:41.973358 | orchestrator | 2025-05-19 19:15:41.973840 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:41.974385 | orchestrator | Monday 19 May 2025 19:15:41 +0000 (0:00:01.014) 0:00:09.328 ************ 2025-05-19 19:15:43.037970 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB6RLLG/2ga+ePyvGZeMPemaBw/86qljHBSD5zB8cqq9) 2025-05-19 19:15:43.038141 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVDTQwH5ym1GYKN2usN4q13TnDj/ayzSu0I99iGs9eYAV/oxt5l8eMeQRks/H84Hv7H09cowHp6oj41kPowf+j7PnMv2pPpmuM8qC6lMgt45h2LS0yGtd4uMdxLcLU6h/ScGo+hcaV6uepaHy3tORNuToS7mabT5MNruuMd3Jw66L5e5KB1fycCKKmn8oDl05DQoaWjr/P5hmoqSNZYm57rSuUUAC03Ga1qct9wuq2HEhsfZH6R2p+6ex9NB45smx16KSsSmGv87NFoKAYv0Z9TCoa1Y5WEKFga73kWflhlqF+iaXXzypnSgnJRv+UpuFWzHWi9uKOJ0d5Gp5oWHtQUBwUxsJoxMFWcjPddLi19h7o7jSkqdKe7iq+MClVaH2+F8WSOQ1cF5CijZ0w3ckTXpaNhCYOXf8O3KIMXlG5ITlaax4NWNuAnnWNtHLb1iWE8rhwXJKUaCkzObqOYLgRosqrVB56dTNADbQkU18AivvehusO5S96j9HQC5GLsHk=) 2025-05-19 19:15:43.038182 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBApukai2oWtCo+m/vAdvrsxIvZpVcG092FQeXajFHyYQ3n/aZcYJ2Pf/6LSbdRZSo5yrDSc41jIlksB53vNZbjA=) 2025-05-19 19:15:43.038199 | orchestrator | 2025-05-19 19:15:43.038212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:43.038225 | orchestrator | Monday 19 May 2025 19:15:43 +0000 (0:00:01.064) 0:00:10.392 ************ 2025-05-19 19:15:44.032965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQClA1/I39xCWwfB1y2Y25BQL4jJiAk9ZSEAmmTCURU/HLIFCRRuh6ZgQVgy5qlEvThvjAw/7fI8wKdN0JEru9Voe6pdVUv8XobFeHGX+3Fjd5/2Qk6qD46hYPpMRLrg5Ze1U7cA3BNSItl3Hdx168Ec1BNJ2+UMG17EWkzru7Nmz1ojImHyIttvd1dzYyLs1Rj1qkDHEoewH/8hvHdssjuDRbvIQ6EbVTH6biSU2+9id02my3MVQrjO2U4Dlk6XOYQUA9DimMJkeMD7Gq/Z4MuLSYagXfIsLaWSUAEkr73TqqQf3z04W6o5CgBaRCbNqrfiw8tuiGhm9PBwX85ET6YZCe+JqeLiLgMCZJzrEwbI0rZKkquXqKiE0or6s9BzDKXgmv+68N5QgobL2fQHCvVhZmnoXnAxg9h9Gtgfng3Ol/KaC34OW031OGQWqjjDIRtiDim8AlTX68jrXVULzSu8lLs6alEBmx9Pi56OCTEhVAnRv81dh97QYCY/9y8sX8M=) 2025-05-19 19:15:44.033122 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHM1oMsoU61nKA4mIpVJqiNORewE7hYf19fIfltgp/QH) 2025-05-19 19:15:44.033147 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI1/Jzd7veh4WFCLI3zkmqcHIx2iei31xMuCmo9v9ExbIkcbntYuOyQ6iQQO8k9JELGxAgGhTCePL50Cdpz0Ao0=) 2025-05-19 19:15:44.033206 | orchestrator | 2025-05-19 19:15:44.033228 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:44.033248 | orchestrator | Monday 19 May 2025 19:15:44 +0000 (0:00:00.995) 0:00:11.388 ************ 2025-05-19 19:15:45.093498 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuhqyyq1awKiQlcHPk0MGrX1AQxMZu69bw8sPSsiga/htS+0MipP7pkmVsP6ofiMOFnNcKOUMIzE7k5zsuP9zwuRi6VZ62eBI8uioWsE/g1yrsltIgbyAMS723A0oHRT2JRBuMKz2ODBe9A5GxG2YqW1voYvs3mMrsHbdTfXe5ji1DaJivtSvdiYuQWfGfnuEP9knALAYDJLpTR2DXJce1w4+43fR0Ib3/h8dEcCjFPri92J8chjKvo341XyzdL4xxqJ1v6SgA9UE24spJJqsDF7dvgHbaHLxzuyvt/aMK1pXE6O0EjlocFCYjwryzkaD0vmJHZEJ98eNfs9pLQxpx65V1G3VqDHe47NgxUPXQDzs1NTA8umT6aAAxTk7Sk+vtJ3F8uJTTVPByY4VypATBf3ivlj/lRIvkSJ6aQIH+oE+nnSpla8/DHAZxCZ/SVkHC93FHsJMZGTcSb1tHjm48FoddKL/DLHdTPg2QowwCq1xMBdGP0S2nWONFcuMkjrU=) 2025-05-19 19:15:45.093682 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNX2pXLrBzNI0106If8OiHUQhgb3TYNJRHi5xIB+DGMvkRn5icONC6JdnOU3jVogeldpW1ppNtmt9QNeeC4d7M=) 2025-05-19 19:15:45.093702 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICYaKbKbJW7g6zI4NhBI7P5yQJoT9U1ty1BJdOGZZ6Yg) 2025-05-19 19:15:45.093716 | orchestrator | 2025-05-19 19:15:45.093729 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:45.093972 | orchestrator | Monday 19 May 2025 19:15:45 +0000 (0:00:01.055) 0:00:12.444 ************ 2025-05-19 19:15:46.173879 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6vhHT9R/qB01Ek30D/5g6jQvCjcGN79Ao4zm+bKVcaOwqHzgc4B2f4vFWKV2xN2kbn9zFjFXWL4yNRvpWZV9Wvm2fxlb9vywctL6vR1COHZPYeey1/9IH3UyCHAH3ccrcY/6rGVnHM1OYERW8Eyq+chT9sUXJpLGMk48ZlmAfMRuDhCw2irikIAjN2dNUKgUo3WHMqcipQI5leorsgWNcRVfb82XjXqz3eqPL9u8pd1rbaYn7tgsLV96xfBxqoxcueNMDYdbCLnLI2i4C4Jdq8GJc06oUr4e8QpuLUaep6HDgx6BfHDNOEMDT7GDuMTW3D4sJxsORlyD6cDMwN3o8pi/sVjbdvH4hE8mfgQt8a6ZHpqipmKujEZy+E0BrrTcHH/FCvmQJzql83b9JZvnA/wI4ared10TaTKcGWzQW//lkPFhhpaeZeSi0v248Jt/NuKYe+hWe4tLH6lzkMy8KtRgswFnC3jIapa0jdwevcwq98+2GrYFUIzbGItNWJi8=) 2025-05-19 19:15:46.174004 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKaiUFE9up6C8IWmMEEmysvUGxeq6WJzDdPb7so+1mAaFspeRVc+vQYoJbekIAHOqm+1PMj3UDAhqVUQqRL2Zvg=) 2025-05-19 
19:15:46.174082 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVlnrXIY3rnaKko8DtAFOqgTIbSGExinSmruHz5/B21) 2025-05-19 19:15:46.174097 | orchestrator | 2025-05-19 19:15:46.174111 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-19 19:15:46.174124 | orchestrator | Monday 19 May 2025 19:15:46 +0000 (0:00:01.082) 0:00:13.527 ************ 2025-05-19 19:15:51.537925 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-19 19:15:51.538203 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-19 19:15:51.538235 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-19 19:15:51.538592 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-19 19:15:51.539540 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-19 19:15:51.539820 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-19 19:15:51.539830 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-19 19:15:51.541679 | orchestrator | 2025-05-19 19:15:51.541846 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-19 19:15:51.542205 | orchestrator | Monday 19 May 2025 19:15:51 +0000 (0:00:05.367) 0:00:18.894 ************ 2025-05-19 19:15:51.707377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-19 19:15:51.707628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-19 19:15:51.708444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-19 19:15:51.709504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-19 19:15:51.710533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-19 19:15:51.711377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-19 19:15:51.711656 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-19 19:15:51.712156 | orchestrator | 2025-05-19 19:15:51.712733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:51.713074 | orchestrator | Monday 19 May 2025 19:15:51 +0000 (0:00:00.172) 0:00:19.067 ************ 2025-05-19 19:15:52.799535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKigN9VMkMNRKfTdfBqjSUFHOjY3pA36QttJn5/GH8J4) 2025-05-19 19:15:52.800876 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCeOXA+1N8YaOHTwPI/gKrOAVvFMq6zSOGlK+/caHSXgthpq7XHFK/YkQbUywlwo8bLadetLbALJsCmDe80eILNqXa4V+uXd4h6iEqLVEVqB1uND42AOdFJz/eHS80hYSLNzes9QcSJ7MFxZaMJ7gtBSfff5IjH+i0z2IurqfZnDefD8DD+AAXu4K1kmeDvktMBNd4kIuJRtiBFCDnHzxCPoyWsi+TewWyMFIL9/p3TyAABuiwKUKfjw+wGX5Waf8JN94uWq7AF73IUwJRYi9SF+m5HOzLqab4ooXgashfgWkKWY0ceUJIRtN1Fvdqv9S6hVm/i2ndP/RCxtU2RfByKkzTYcirM0X/pGcZY21kbVblWFuG9L5qMHffBn7+rA6njFDyWP1Hc8wL/E1MhxNIEaQPT6QXKko2OT/o/mvz61hVUtY6wZdNNnhO4X2i49OhC25CtWDSYOWbi3phCK0bAFisTQbv1L9hdwHl6shJRo3YAcBSAkp5s6xkYevORdOk=) 2025-05-19 19:15:52.801379 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJb5S13/DuhYVMbTG+xudyENxrDvae7bVVi5V2k5mu93UXeSaZ+aWQkqEaDdtIJjy9YYmorHPy0eXlNS5jSxuko=) 2025-05-19 19:15:52.802603 | orchestrator | 2025-05-19 19:15:52.803591 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:52.804000 | orchestrator | Monday 19 May 2025 19:15:52 +0000 (0:00:01.090) 0:00:20.158 ************ 2025-05-19 19:15:53.867210 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCpF6GYEeQ1VnvcxnL5gBSjc7+hayFrFbFdjOtOXgEJeMWwZEws9kQXiDIc1kZ77ANlNyNiL3LHvIoVo0ENyLVfgR1Q8g6Vy/57hffD2rUz3IFn4AThH0OPXKwdVay79Ex45BFB0iNlkSz9OKBOLRY6tUHqE61bVv3eLgfJkYsp+ie1XU+U9myDTGd8T2zWk0501BIhyu5qWqjGtedcFSIOYdQ31gaEY3UbWAd/bxlUfzDOtCBlNqQ1NRmkID9JdIUtFwT/Nm90FpbeGkkLU6mkpK3L7cIAqrR4LrMzxK74XKBK7gqW703Ex7EKyUKHUXN1OwtSyATyjg82PdPaann0pQbpHNtV6CpE3rgaosFKsY3JQHdMeEjLUx3/PPuQDJxdGx4UuzFqWDYwJmGWeNxWr5hehUFBBZylrrdJEjSyBzKdGrXugaVMMiD7QNuLYCDVLQx8Sio2edc5yYxoIKdvGISoHRaHONwy2G4FP0znjIWVVbCE/xaLBQ00jdlrVvc=) 2025-05-19 19:15:53.868073 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOOAwzJVUgZbugzOgp8c3NQbLEgMUW56UCSAQHkH6RMZdyOk+QhGr9us5uLyG+UQqcsJkFD4Gh+VxlZoIWILJ3s=) 2025-05-19 19:15:53.868946 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOLLx3MYpwc+QHSJNb0U9IkQDSixCw3G0KCRZ5OnY44f) 2025-05-19 19:15:53.869156 | orchestrator | 2025-05-19 19:15:53.870098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:53.870364 | orchestrator | Monday 19 May 2025 19:15:53 +0000 (0:00:01.067) 0:00:21.225 ************ 2025-05-19 19:15:54.968529 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClls5VxR1nh6XugiMYbXKfOgMxL2/vYS89gNGnpGwTWUGWmu48yTNJFt57/ENUd5iaWvH9BOGkYJb9fzpigY/GwcadvIxuM6nDFiWElfnhTq46UyoXY49gR/6KWl2nYOONFiVFALB3R2RQnRXrphzweAw67WCNtSSZL2fHbXmyMIXjHUf0pW+micOBqFqMg9nvAVaJ6NPuC8q4OZQsmck1Ukm+5q7KkvC5ltvJEtOMmCjvPmIq+4Ax+93QjJts9wmA3TB670yBsd/znUOf/khW/M4en7sRoNmClG23ss+3cuchgiRjAVHaKDQ0DfyhBXPPutlmw5lSDwID5lsvjuAHYNCK95FSLTScVzVXXPpITBen2HKvhNE/W8vDn5iMxXznC6dWh8jSj4fJ7Zise7MeGfKykJW6Qztz4PqgsrLOHC4Aw+lxHnEFSXGjfpQ9Gt1T/E+r1UnKZ65IwOG240vcVXw8flSdlvZxDK8y+XjAaFCxj/ufYVyf4hn4yYqBd3U=) 2025-05-19 19:15:54.968973 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2dEikNCAbTtWiQeBvRg6Y6LxtlxFyvA38OjEtTakCrJN4KzB9HC+aXZvxTg1c1fTQeMq2hBYWzt2EAWyeBLb4=) 2025-05-19 19:15:54.969006 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOW872LUZLbGn1zVyef8yQFUCVvJ3WM7TwRquOhu2sql) 2025-05-19 
19:15:54.969184 | orchestrator | 2025-05-19 19:15:54.969950 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:54.970714 | orchestrator | Monday 19 May 2025 19:15:54 +0000 (0:00:01.101) 0:00:22.326 ************ 2025-05-19 19:15:56.057308 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB6RLLG/2ga+ePyvGZeMPemaBw/86qljHBSD5zB8cqq9) 2025-05-19 19:15:56.057436 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDVDTQwH5ym1GYKN2usN4q13TnDj/ayzSu0I99iGs9eYAV/oxt5l8eMeQRks/H84Hv7H09cowHp6oj41kPowf+j7PnMv2pPpmuM8qC6lMgt45h2LS0yGtd4uMdxLcLU6h/ScGo+hcaV6uepaHy3tORNuToS7mabT5MNruuMd3Jw66L5e5KB1fycCKKmn8oDl05DQoaWjr/P5hmoqSNZYm57rSuUUAC03Ga1qct9wuq2HEhsfZH6R2p+6ex9NB45smx16KSsSmGv87NFoKAYv0Z9TCoa1Y5WEKFga73kWflhlqF+iaXXzypnSgnJRv+UpuFWzHWi9uKOJ0d5Gp5oWHtQUBwUxsJoxMFWcjPddLi19h7o7jSkqdKe7iq+MClVaH2+F8WSOQ1cF5CijZ0w3ckTXpaNhCYOXf8O3KIMXlG5ITlaax4NWNuAnnWNtHLb1iWE8rhwXJKUaCkzObqOYLgRosqrVB56dTNADbQkU18AivvehusO5S96j9HQC5GLsHk=) 2025-05-19 19:15:56.057982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBApukai2oWtCo+m/vAdvrsxIvZpVcG092FQeXajFHyYQ3n/aZcYJ2Pf/6LSbdRZSo5yrDSc41jIlksB53vNZbjA=) 2025-05-19 19:15:56.059666 | orchestrator | 2025-05-19 19:15:56.060143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:56.061092 | orchestrator | Monday 19 May 2025 19:15:56 +0000 (0:00:01.089) 0:00:23.415 ************ 2025-05-19 19:15:57.139509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHM1oMsoU61nKA4mIpVJqiNORewE7hYf19fIfltgp/QH) 2025-05-19 19:15:57.139690 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQClA1/I39xCWwfB1y2Y25BQL4jJiAk9ZSEAmmTCURU/HLIFCRRuh6ZgQVgy5qlEvThvjAw/7fI8wKdN0JEru9Voe6pdVUv8XobFeHGX+3Fjd5/2Qk6qD46hYPpMRLrg5Ze1U7cA3BNSItl3Hdx168Ec1BNJ2+UMG17EWkzru7Nmz1ojImHyIttvd1dzYyLs1Rj1qkDHEoewH/8hvHdssjuDRbvIQ6EbVTH6biSU2+9id02my3MVQrjO2U4Dlk6XOYQUA9DimMJkeMD7Gq/Z4MuLSYagXfIsLaWSUAEkr73TqqQf3z04W6o5CgBaRCbNqrfiw8tuiGhm9PBwX85ET6YZCe+JqeLiLgMCZJzrEwbI0rZKkquXqKiE0or6s9BzDKXgmv+68N5QgobL2fQHCvVhZmnoXnAxg9h9Gtgfng3Ol/KaC34OW031OGQWqjjDIRtiDim8AlTX68jrXVULzSu8lLs6alEBmx9Pi56OCTEhVAnRv81dh97QYCY/9y8sX8M=) 2025-05-19 19:15:57.139711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI1/Jzd7veh4WFCLI3zkmqcHIx2iei31xMuCmo9v9ExbIkcbntYuOyQ6iQQO8k9JELGxAgGhTCePL50Cdpz0Ao0=) 2025-05-19 19:15:57.139738 | orchestrator | 2025-05-19 19:15:57.140116 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:57.140859 | orchestrator | Monday 19 May 2025 19:15:57 +0000 (0:00:01.079) 0:00:24.494 ************ 2025-05-19 19:15:58.141128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDNX2pXLrBzNI0106If8OiHUQhgb3TYNJRHi5xIB+DGMvkRn5icONC6JdnOU3jVogeldpW1ppNtmt9QNeeC4d7M=) 2025-05-19 19:15:58.141332 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCuhqyyq1awKiQlcHPk0MGrX1AQxMZu69bw8sPSsiga/htS+0MipP7pkmVsP6ofiMOFnNcKOUMIzE7k5zsuP9zwuRi6VZ62eBI8uioWsE/g1yrsltIgbyAMS723A0oHRT2JRBuMKz2ODBe9A5GxG2YqW1voYvs3mMrsHbdTfXe5ji1DaJivtSvdiYuQWfGfnuEP9knALAYDJLpTR2DXJce1w4+43fR0Ib3/h8dEcCjFPri92J8chjKvo341XyzdL4xxqJ1v6SgA9UE24spJJqsDF7dvgHbaHLxzuyvt/aMK1pXE6O0EjlocFCYjwryzkaD0vmJHZEJ98eNfs9pLQxpx65V1G3VqDHe47NgxUPXQDzs1NTA8umT6aAAxTk7Sk+vtJ3F8uJTTVPByY4VypATBf3ivlj/lRIvkSJ6aQIH+oE+nnSpla8/DHAZxCZ/SVkHC93FHsJMZGTcSb1tHjm48FoddKL/DLHdTPg2QowwCq1xMBdGP0S2nWONFcuMkjrU=) 2025-05-19 19:15:58.141458 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICYaKbKbJW7g6zI4NhBI7P5yQJoT9U1ty1BJdOGZZ6Yg) 2025-05-19 19:15:58.142183 | orchestrator | 2025-05-19 19:15:58.142486 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 19:15:58.144231 | orchestrator | Monday 19 May 2025 19:15:58 +0000 (0:00:01.005) 0:00:25.500 ************ 2025-05-19 19:15:59.146659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKaiUFE9up6C8IWmMEEmysvUGxeq6WJzDdPb7so+1mAaFspeRVc+vQYoJbekIAHOqm+1PMj3UDAhqVUQqRL2Zvg=) 2025-05-19 19:15:59.146903 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6vhHT9R/qB01Ek30D/5g6jQvCjcGN79Ao4zm+bKVcaOwqHzgc4B2f4vFWKV2xN2kbn9zFjFXWL4yNRvpWZV9Wvm2fxlb9vywctL6vR1COHZPYeey1/9IH3UyCHAH3ccrcY/6rGVnHM1OYERW8Eyq+chT9sUXJpLGMk48ZlmAfMRuDhCw2irikIAjN2dNUKgUo3WHMqcipQI5leorsgWNcRVfb82XjXqz3eqPL9u8pd1rbaYn7tgsLV96xfBxqoxcueNMDYdbCLnLI2i4C4Jdq8GJc06oUr4e8QpuLUaep6HDgx6BfHDNOEMDT7GDuMTW3D4sJxsORlyD6cDMwN3o8pi/sVjbdvH4hE8mfgQt8a6ZHpqipmKujEZy+E0BrrTcHH/FCvmQJzql83b9JZvnA/wI4ared10TaTKcGWzQW//lkPFhhpaeZeSi0v248Jt/NuKYe+hWe4tLH6lzkMy8KtRgswFnC3jIapa0jdwevcwq98+2GrYFUIzbGItNWJi8=) 2025-05-19 19:15:59.147196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHVlnrXIY3rnaKko8DtAFOqgTIbSGExinSmruHz5/B21) 2025-05-19 19:15:59.147857 | orchestrator | 2025-05-19 19:15:59.148000 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-19 19:15:59.148676 | orchestrator | Monday 19 May 2025 19:15:59 +0000 (0:00:01.004) 0:00:26.505 ************ 2025-05-19 19:15:59.319914 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-19 19:15:59.320049 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-19 19:15:59.320772 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-19 19:15:59.321101 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-19 19:15:59.321987 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-19 19:15:59.322529 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-19 19:15:59.322918 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-19 19:15:59.323672 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:59.323880 | orchestrator | 2025-05-19 19:15:59.324293 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-19 19:15:59.324805 | orchestrator | Monday 19 May 2025 19:15:59 +0000 (0:00:00.174) 0:00:26.680 ************ 2025-05-19 19:15:59.374854 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:59.375710 | orchestrator | 2025-05-19 
19:15:59.376554 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-19 19:15:59.376973 | orchestrator | Monday 19 May 2025 19:15:59 +0000 (0:00:00.053) 0:00:26.733 ************ 2025-05-19 19:15:59.435802 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:15:59.436295 | orchestrator | 2025-05-19 19:15:59.437146 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-19 19:15:59.438363 | orchestrator | Monday 19 May 2025 19:15:59 +0000 (0:00:00.061) 0:00:26.795 ************ 2025-05-19 19:16:00.162688 | orchestrator | changed: [testbed-manager] 2025-05-19 19:16:00.163828 | orchestrator | 2025-05-19 19:16:00.165020 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:16:00.165984 | orchestrator | 2025-05-19 19:16:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:16:00.166011 | orchestrator | 2025-05-19 19:16:00 | INFO  | Please wait and do not abort execution. 2025-05-19 19:16:00.167416 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:16:00.168345 | orchestrator | 2025-05-19 19:16:00.169104 | orchestrator | Monday 19 May 2025 19:16:00 +0000 (0:00:00.724) 0:00:27.519 ************ 2025-05-19 19:16:00.169806 | orchestrator | =============================================================================== 2025-05-19 19:16:00.170481 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.87s 2025-05-19 19:16:00.171283 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.37s 2025-05-19 19:16:00.171944 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-05-19 19:16:00.173001 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-05-19 19:16:00.173236 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-19 19:16:00.173630 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-19 19:16:00.174096 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-19 19:16:00.174723 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-05-19 19:16:00.175190 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-05-19 19:16:00.176038 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-19 19:16:00.176177 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-19 19:16:00.176630 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-19 19:16:00.177065 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-05-19 19:16:00.177668 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-19 19:16:00.178109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-19 19:16:00.178496 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-05-19 19:16:00.178951 | orchestrator 
| osism.commons.known_hosts : Set file permissions ------------------------ 0.72s 2025-05-19 19:16:00.179249 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-19 19:16:00.179774 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-19 19:16:00.180187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-19 19:16:00.521171 | orchestrator | + osism apply squid 2025-05-19 19:16:01.953768 | orchestrator | 2025-05-19 19:16:01 | INFO  | Task f7e918e0-d96e-493d-9ac1-72a9cf8d557f (squid) was prepared for execution. 2025-05-19 19:16:01.953887 | orchestrator | 2025-05-19 19:16:01 | INFO  | It takes a moment until task f7e918e0-d96e-493d-9ac1-72a9cf8d557f (squid) has been started and output is visible here. 2025-05-19 19:16:04.915812 | orchestrator | 2025-05-19 19:16:04.916084 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-19 19:16:04.917106 | orchestrator | 2025-05-19 19:16:04.918289 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-19 19:16:04.918788 | orchestrator | Monday 19 May 2025 19:16:04 +0000 (0:00:00.102) 0:00:00.102 ************ 2025-05-19 19:16:05.006067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 19:16:05.007786 | orchestrator | 2025-05-19 19:16:05.009417 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-19 19:16:05.010502 | orchestrator | Monday 19 May 2025 19:16:04 +0000 (0:00:00.093) 0:00:00.195 ************ 2025-05-19 19:16:06.391459 | orchestrator | ok: [testbed-manager] 2025-05-19 19:16:06.391829 | orchestrator | 2025-05-19 19:16:06.392785 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-19 19:16:06.393376 | orchestrator | Monday 19 May 2025 19:16:06 +0000 (0:00:01.383) 0:00:01.579 ************ 2025-05-19 19:16:07.487714 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-19 19:16:07.488329 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-19 19:16:07.489082 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-19 19:16:07.489608 | orchestrator | 2025-05-19 19:16:07.490220 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-19 19:16:07.490868 | orchestrator | Monday 19 May 2025 19:16:07 +0000 (0:00:01.097) 0:00:02.677 ************ 2025-05-19 19:16:08.565072 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-19 19:16:08.565328 | orchestrator | 2025-05-19 19:16:08.565627 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-19 19:16:08.566139 | orchestrator | Monday 19 May 2025 19:16:08 +0000 (0:00:01.076) 0:00:03.754 ************ 2025-05-19 19:16:08.940642 | orchestrator | ok: [testbed-manager] 2025-05-19 19:16:08.940907 | orchestrator | 2025-05-19 19:16:08.942131 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-19 19:16:08.942892 | orchestrator | Monday 19 May 2025 19:16:08 +0000 (0:00:00.376) 0:00:04.130 ************ 2025-05-19 19:16:09.921728 | orchestrator | 
changed: [testbed-manager] 2025-05-19 19:16:09.921862 | orchestrator | 2025-05-19 19:16:09.922816 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-19 19:16:09.923096 | orchestrator | Monday 19 May 2025 19:16:09 +0000 (0:00:00.979) 0:00:05.109 ************ 2025-05-19 19:16:41.106421 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 2025-05-19 19:16:41.106641 | orchestrator | ok: [testbed-manager] 2025-05-19 19:16:41.106667 | orchestrator | 2025-05-19 19:16:41.106686 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-19 19:16:41.106707 | orchestrator | Monday 19 May 2025 19:16:41 +0000 (0:00:31.182) 0:00:36.292 ************ 2025-05-19 19:16:53.537216 | orchestrator | changed: [testbed-manager] 2025-05-19 19:16:53.537346 | orchestrator | 2025-05-19 19:16:53.537365 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-19 19:16:53.537386 | orchestrator | Monday 19 May 2025 19:16:53 +0000 (0:00:12.427) 0:00:48.719 ************ 2025-05-19 19:17:53.615181 | orchestrator | Pausing for 60 seconds 2025-05-19 19:17:53.615326 | orchestrator | changed: [testbed-manager] 2025-05-19 19:17:53.615346 | orchestrator | 2025-05-19 19:17:53.615359 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-19 19:17:53.615373 | orchestrator | Monday 19 May 2025 19:17:53 +0000 (0:01:00.078) 0:01:48.798 ************ 2025-05-19 19:17:53.678971 | orchestrator | ok: [testbed-manager] 2025-05-19 19:17:53.679084 | orchestrator | 2025-05-19 19:17:53.679940 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-19 19:17:53.680485 | orchestrator | Monday 19 May 2025 19:17:53 +0000 (0:00:00.071) 0:01:48.869 ************ 2025-05-19 19:17:54.271509 | orchestrator | changed: [testbed-manager] 2025-05-19 19:17:54.271626 | orchestrator | 2025-05-19 19:17:54.273072 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:17:54.273450 | orchestrator | 2025-05-19 19:17:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:17:54.273475 | orchestrator | 2025-05-19 19:17:54 | INFO  | Please wait and do not abort execution. 
2025-05-19 19:17:54.274707 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:17:54.275653 | orchestrator | 2025-05-19 19:17:54.276170 | orchestrator | Monday 19 May 2025 19:17:54 +0000 (0:00:00.591) 0:01:49.460 ************ 2025-05-19 19:17:54.276777 | orchestrator | =============================================================================== 2025-05-19 19:17:54.277531 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-19 19:17:54.278138 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.18s 2025-05-19 19:17:54.278531 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.43s 2025-05-19 19:17:54.278916 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.38s 2025-05-19 19:17:54.279390 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.10s 2025-05-19 19:17:54.279890 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.08s 2025-05-19 19:17:54.280382 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s 2025-05-19 19:17:54.280861 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2025-05-19 19:17:54.281453 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-05-19 19:17:54.281800 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-05-19 19:17:54.282225 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-05-19 19:17:54.634521 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-19 19:17:54.634646 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-19 19:17:54.639191 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-19 19:17:54.694934 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-19 19:17:54.695020 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-19 19:17:54.695036 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-19 19:17:54.698365 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-19 19:17:54.704008 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-19 19:17:54.709571 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-19 19:17:56.074093 | orchestrator | 2025-05-19 19:17:56 | INFO  | Task 3b22cddd-59e1-4834-8395-51d88a0b2370 (operator) was prepared for execution. 2025-05-19 19:17:56.074212 | orchestrator | 2025-05-19 19:17:56 | INFO  | It takes a moment until task 3b22cddd-59e1-4834-8395-51d88a0b2370 (operator) has been started and output is visible here. 
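Note: the shell trace above adjusts the testbed configuration before the next play. It switches the Kolla image namespace in /opt/configuration/inventory/group_vars/all/kolla.yml from "docker_namespace: kolla" to "docker_namespace: kolla/release", and it uncomments the network dispatcher script entries for the VXLAN overlay. Reconstructed from the sed patterns (YAML indentation normalized here, so read it as a sketch rather than the literal file contents), the uncommented block in testbed-nodes.yml is roughly:

    network_dispatcher_scripts:
      - src: /opt/configuration/network/vxlan.sh
        dest: routable.d/vxlan.sh

The two list lines are uncommented in testbed-managers.yml as well by the same sed calls; only the "network_dispatcher_scripts:" key line is limited to testbed-nodes.yml.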
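The "osism apply operator" run that follows prepares the deployment user on all six testbed nodes: group, user, supplementary groups, sudoers file, locale exports in .bashrc, .ssh directory, authorized keys, and password. As a rough equivalent built from stock Ansible modules, with the user name and public key as placeholders and no claim to match the actual osism.commons.operator tasks:

    # Illustrative sketch only -- not the osism.commons.operator role itself.
    # It mirrors the steps visible in the play output below: create the group
    # and user, add supplementary groups, grant sudo, install an SSH key.
    - name: Prepare deployment user (sketch)
      hosts: testbed-nodes
      become: true
      vars:
        operator_user: dragon                           # placeholder user name
        operator_pubkey: "ssh-ed25519 AAAA... operator"  # placeholder public key
      tasks:
        - name: Create operator group
          ansible.builtin.group:
            name: "{{ operator_user }}"

        - name: Create operator user
          ansible.builtin.user:
            name: "{{ operator_user }}"
            group: "{{ operator_user }}"
            groups: [adm, sudo]
            append: true
            shell: /bin/bash

        - name: Allow passwordless sudo
          ansible.builtin.copy:
            dest: "/etc/sudoers.d/{{ operator_user }}"
            content: "{{ operator_user }} ALL=(ALL) NOPASSWD: ALL\n"
            mode: "0440"
            validate: /usr/sbin/visudo -cf %s

        - name: Install SSH authorized key
          ansible.posix.authorized_key:
            user: "{{ operator_user }}"
            key: "{{ operator_pubkey }}"

The real role is driven by the testbed inventory (the -u ubuntu flag only selects the connection user), so the values above are stand-ins for illustration.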
2025-05-19 19:17:58.971979 | orchestrator | 2025-05-19 19:17:58.972221 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-19 19:17:58.982279 | orchestrator | 2025-05-19 19:17:58.982312 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 19:17:58.986591 | orchestrator | Monday 19 May 2025 19:17:58 +0000 (0:00:00.082) 0:00:00.083 ************ 2025-05-19 19:18:02.199979 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:02.200107 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:02.200123 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:02.202474 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:02.202549 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:02.203255 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:02.203670 | orchestrator | 2025-05-19 19:18:02.203949 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-19 19:18:02.206238 | orchestrator | Monday 19 May 2025 19:18:02 +0000 (0:00:03.229) 0:00:03.312 ************ 2025-05-19 19:18:02.949464 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:02.949612 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:02.949629 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:02.949847 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:02.951219 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:02.951431 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:02.951726 | orchestrator | 2025-05-19 19:18:02.955047 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-19 19:18:02.958763 | orchestrator | 2025-05-19 19:18:02.958804 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-19 19:18:02.959099 | orchestrator | Monday 19 May 2025 19:18:02 +0000 (0:00:00.749) 0:00:04.062 ************ 2025-05-19 19:18:03.037780 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:03.054570 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:03.108166 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:03.110690 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:03.113382 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:03.114463 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:03.116620 | orchestrator | 2025-05-19 19:18:03.117824 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-19 19:18:03.119108 | orchestrator | Monday 19 May 2025 19:18:03 +0000 (0:00:00.157) 0:00:04.220 ************ 2025-05-19 19:18:03.174951 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:03.196017 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:03.221314 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:03.255753 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:03.255800 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:03.257504 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:03.257772 | orchestrator | 2025-05-19 19:18:03.258747 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-19 19:18:03.259083 | orchestrator | Monday 19 May 2025 19:18:03 +0000 (0:00:00.149) 0:00:04.369 ************ 2025-05-19 19:18:03.844387 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:03.844987 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:03.845715 | orchestrator | changed: [testbed-node-3] 2025-05-19 
19:18:03.846500 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:03.847206 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:03.847714 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:03.848416 | orchestrator | 2025-05-19 19:18:03.849206 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-19 19:18:03.849658 | orchestrator | Monday 19 May 2025 19:18:03 +0000 (0:00:00.585) 0:00:04.955 ************ 2025-05-19 19:18:04.644137 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:04.644524 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:04.644987 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:04.647036 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:04.647903 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:04.648050 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:04.648654 | orchestrator | 2025-05-19 19:18:04.649286 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-19 19:18:04.649744 | orchestrator | Monday 19 May 2025 19:18:04 +0000 (0:00:00.799) 0:00:05.754 ************ 2025-05-19 19:18:05.759602 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-19 19:18:05.759986 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-19 19:18:05.761089 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-19 19:18:05.761445 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-19 19:18:05.762574 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-19 19:18:05.763290 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-19 19:18:05.763848 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-19 19:18:05.765667 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-19 19:18:05.766045 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-19 19:18:05.766873 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-19 19:18:05.767215 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-19 19:18:05.767856 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-19 19:18:05.771065 | orchestrator | 2025-05-19 19:18:05.771376 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-19 19:18:05.771839 | orchestrator | Monday 19 May 2025 19:18:05 +0000 (0:00:01.117) 0:00:06.871 ************ 2025-05-19 19:18:07.018487 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:07.018584 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:07.018595 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:07.019032 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:07.021158 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:07.021276 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:07.021654 | orchestrator | 2025-05-19 19:18:07.022074 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-19 19:18:07.022257 | orchestrator | Monday 19 May 2025 19:18:07 +0000 (0:00:01.257) 0:00:08.129 ************ 2025-05-19 19:18:08.176260 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-05-19 19:18:08.177567 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-05-19 19:18:08.178877 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-05-19 19:18:08.303946 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.304355 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.305335 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.306766 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.306879 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.311615 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 19:18:08.311841 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.312932 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.312952 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.313343 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.313666 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.316462 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-05-19 19:18:08.316485 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.316939 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.317358 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.320263 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.320485 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.320756 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-05-19 19:18:08.320902 | orchestrator | 2025-05-19 19:18:08.321470 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-19 19:18:08.321783 | orchestrator | Monday 19 May 2025 19:18:08 +0000 (0:00:01.286) 0:00:09.415 ************ 2025-05-19 19:18:08.911831 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:08.912798 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:08.915836 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:08.915862 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:08.916958 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:08.920250 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:08.920939 | orchestrator | 2025-05-19 19:18:08.921561 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-19 19:18:08.923026 | orchestrator | Monday 19 May 2025 19:18:08 +0000 (0:00:00.607) 0:00:10.023 ************ 2025-05-19 19:18:08.977797 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:18:08.999892 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:18:09.025886 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:18:09.080708 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:09.080786 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:09.080799 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:09.080811 | orchestrator | 2025-05-19 19:18:09.080825 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-05-19 19:18:09.080839 | orchestrator | Monday 19 May 2025 19:18:09 +0000 (0:00:00.167) 0:00:10.191 ************ 2025-05-19 19:18:09.765363 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-19 19:18:09.765605 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 19:18:09.767671 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:09.767733 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:09.768180 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-19 19:18:09.769889 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 19:18:09.769910 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 19:18:09.770084 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:09.770788 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:09.771302 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:09.771593 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:18:09.772127 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:09.772970 | orchestrator | 2025-05-19 19:18:09.773191 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-19 19:18:09.773991 | orchestrator | Monday 19 May 2025 19:18:09 +0000 (0:00:00.687) 0:00:10.878 ************ 2025-05-19 19:18:09.806424 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:18:09.822807 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:18:09.877239 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:18:09.901501 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:09.907911 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:09.907952 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:09.907965 | orchestrator | 2025-05-19 19:18:09.907978 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-19 19:18:09.907991 | orchestrator | Monday 19 May 2025 19:18:09 +0000 (0:00:00.136) 0:00:11.015 ************ 2025-05-19 19:18:09.942580 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:18:09.982903 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:18:10.002453 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:18:10.035459 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:10.036493 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:10.038313 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:10.039030 | orchestrator | 2025-05-19 19:18:10.039684 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-19 19:18:10.040221 | orchestrator | Monday 19 May 2025 19:18:10 +0000 (0:00:00.133) 0:00:11.149 ************ 2025-05-19 19:18:10.093493 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:18:10.112809 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:18:10.133982 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:18:10.176719 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:10.177047 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:10.177509 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:10.178012 | orchestrator | 2025-05-19 19:18:10.178481 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-19 19:18:10.179237 | orchestrator | Monday 19 May 2025 19:18:10 +0000 (0:00:00.141) 0:00:11.290 ************ 2025-05-19 19:18:10.828604 | orchestrator | changed: [testbed-node-0] 2025-05-19 
19:18:10.829210 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:10.829421 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:10.829864 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:10.831554 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:10.831723 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:10.831770 | orchestrator | 2025-05-19 19:18:10.832340 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-19 19:18:10.832537 | orchestrator | Monday 19 May 2025 19:18:10 +0000 (0:00:00.649) 0:00:11.940 ************ 2025-05-19 19:18:10.892173 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:18:10.933289 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:18:11.069062 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:18:11.069959 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:11.076944 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:11.076972 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:11.076984 | orchestrator | 2025-05-19 19:18:11.076997 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:18:11.077031 | orchestrator | 2025-05-19 19:18:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:18:11.077047 | orchestrator | 2025-05-19 19:18:11 | INFO  | Please wait and do not abort execution. 2025-05-19 19:18:11.081219 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.081242 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.081489 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.083883 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.084311 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.085561 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:18:11.086381 | orchestrator | 2025-05-19 19:18:11.088743 | orchestrator | Monday 19 May 2025 19:18:11 +0000 (0:00:00.234) 0:00:12.174 ************ 2025-05-19 19:18:11.088764 | orchestrator | =============================================================================== 2025-05-19 19:18:11.093578 | orchestrator | Gathering Facts --------------------------------------------------------- 3.23s 2025-05-19 19:18:11.093601 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s 2025-05-19 19:18:11.093613 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s 2025-05-19 19:18:11.093624 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s 2025-05-19 19:18:11.093840 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-05-19 19:18:11.094800 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s 2025-05-19 19:18:11.097160 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.69s 2025-05-19 19:18:11.097704 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.65s 2025-05-19 19:18:11.098386 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2025-05-19 19:18:11.100858 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2025-05-19 19:18:11.101875 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-05-19 19:18:11.104171 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-05-19 19:18:11.105341 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-05-19 19:18:11.107533 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s 2025-05-19 19:18:11.110112 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-05-19 19:18:11.110668 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-19 19:18:11.113010 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s 2025-05-19 19:18:11.511888 | orchestrator | + osism apply --environment custom facts 2025-05-19 19:18:12.866951 | orchestrator | 2025-05-19 19:18:12 | INFO  | Trying to run play facts in environment custom 2025-05-19 19:18:12.928730 | orchestrator | 2025-05-19 19:18:12 | INFO  | Task 9cf718e7-d49e-4668-be60-0275d8d398aa (facts) was prepared for execution. 2025-05-19 19:18:12.928833 | orchestrator | 2025-05-19 19:18:12 | INFO  | It takes a moment until task 9cf718e7-d49e-4668-be60-0275d8d398aa (facts) has been started and output is visible here. 2025-05-19 19:18:15.894871 | orchestrator | 2025-05-19 19:18:15.895646 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-19 19:18:15.896226 | orchestrator | 2025-05-19 19:18:15.896921 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 19:18:15.900683 | orchestrator | Monday 19 May 2025 19:18:15 +0000 (0:00:00.078) 0:00:00.078 ************ 2025-05-19 19:18:17.177346 | orchestrator | ok: [testbed-manager] 2025-05-19 19:18:18.288614 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:18.289189 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:18.290715 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:18.291587 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:18.292769 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:18.293695 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:18.294475 | orchestrator | 2025-05-19 19:18:18.295468 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-19 19:18:18.295757 | orchestrator | Monday 19 May 2025 19:18:18 +0000 (0:00:02.393) 0:00:02.472 ************ 2025-05-19 19:18:19.416633 | orchestrator | ok: [testbed-manager] 2025-05-19 19:18:20.244905 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:20.245021 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:18:20.245234 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:20.245650 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:20.245978 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:18:20.247963 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:18:20.247989 | orchestrator | 2025-05-19 19:18:20.248458 | orchestrator | PLAY [Copy custom ceph devices facts] 
****************************************** 2025-05-19 19:18:20.249180 | orchestrator | 2025-05-19 19:18:20.249429 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 19:18:20.249953 | orchestrator | Monday 19 May 2025 19:18:20 +0000 (0:00:01.955) 0:00:04.427 ************ 2025-05-19 19:18:20.346620 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:20.346834 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:20.347709 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:20.349433 | orchestrator | 2025-05-19 19:18:20.349716 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 19:18:20.350145 | orchestrator | Monday 19 May 2025 19:18:20 +0000 (0:00:00.104) 0:00:04.532 ************ 2025-05-19 19:18:20.463834 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:20.464013 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:20.464088 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:20.464578 | orchestrator | 2025-05-19 19:18:20.464771 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 19:18:20.465000 | orchestrator | Monday 19 May 2025 19:18:20 +0000 (0:00:00.117) 0:00:04.650 ************ 2025-05-19 19:18:20.583269 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:20.583529 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:20.584250 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:20.585075 | orchestrator | 2025-05-19 19:18:20.585570 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 19:18:20.586123 | orchestrator | Monday 19 May 2025 19:18:20 +0000 (0:00:00.116) 0:00:04.767 ************ 2025-05-19 19:18:20.713478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:18:20.713579 | orchestrator | 2025-05-19 19:18:20.713595 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 19:18:20.716918 | orchestrator | Monday 19 May 2025 19:18:20 +0000 (0:00:00.130) 0:00:04.897 ************ 2025-05-19 19:18:21.149861 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:21.152885 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:21.153285 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:21.154526 | orchestrator | 2025-05-19 19:18:21.155259 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 19:18:21.157712 | orchestrator | Monday 19 May 2025 19:18:21 +0000 (0:00:00.437) 0:00:05.335 ************ 2025-05-19 19:18:21.223555 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:21.224055 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:21.224672 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:21.225555 | orchestrator | 2025-05-19 19:18:21.227739 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 19:18:21.228775 | orchestrator | Monday 19 May 2025 19:18:21 +0000 (0:00:00.075) 0:00:05.410 ************ 2025-05-19 19:18:22.155908 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:22.156077 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:22.156968 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:22.157732 | orchestrator | 2025-05-19 19:18:22.158480 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 19:18:22.158785 | orchestrator | Monday 19 May 2025 19:18:22 +0000 (0:00:00.930) 0:00:06.341 ************ 2025-05-19 19:18:22.612673 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:22.612789 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:22.614632 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:22.614785 | orchestrator | 2025-05-19 19:18:22.614870 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 19:18:22.615150 | orchestrator | Monday 19 May 2025 19:18:22 +0000 (0:00:00.453) 0:00:06.795 ************ 2025-05-19 19:18:23.590467 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:23.591205 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:23.592788 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:23.592813 | orchestrator | 2025-05-19 19:18:23.593538 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 19:18:23.596030 | orchestrator | Monday 19 May 2025 19:18:23 +0000 (0:00:00.980) 0:00:07.775 ************ 2025-05-19 19:18:37.164801 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:37.164950 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:37.164966 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:37.164978 | orchestrator | 2025-05-19 19:18:37.164991 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-19 19:18:37.165004 | orchestrator | Monday 19 May 2025 19:18:37 +0000 (0:00:13.565) 0:00:21.341 ************ 2025-05-19 19:18:37.218452 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:18:37.252134 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:18:37.252253 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:18:37.252269 | orchestrator | 2025-05-19 19:18:37.252541 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-19 19:18:37.254138 | orchestrator | Monday 19 May 2025 19:18:37 +0000 (0:00:00.096) 0:00:21.438 ************ 2025-05-19 19:18:44.246884 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:18:44.247500 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:18:44.248935 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:18:44.250084 | orchestrator | 2025-05-19 19:18:44.250934 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 19:18:44.252014 | orchestrator | Monday 19 May 2025 19:18:44 +0000 (0:00:06.992) 0:00:28.430 ************ 2025-05-19 19:18:44.711920 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:44.712299 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:44.712963 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:44.713804 | orchestrator | 2025-05-19 19:18:44.715420 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-19 19:18:44.715819 | orchestrator | Monday 19 May 2025 19:18:44 +0000 (0:00:00.466) 0:00:28.897 ************ 2025-05-19 19:18:48.132510 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-19 19:18:48.132637 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-19 19:18:48.132653 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-19 19:18:48.133593 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 
2025-05-19 19:18:48.135114 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-19 19:18:48.136080 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-19 19:18:48.136842 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-19 19:18:48.137218 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-19 19:18:48.137938 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-19 19:18:48.138417 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-19 19:18:48.139068 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-19 19:18:48.140254 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-19 19:18:48.141008 | orchestrator | 2025-05-19 19:18:48.141960 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 19:18:48.142803 | orchestrator | Monday 19 May 2025 19:18:48 +0000 (0:00:03.416) 0:00:32.314 ************ 2025-05-19 19:18:49.253677 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:49.255654 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:49.256895 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:49.257408 | orchestrator | 2025-05-19 19:18:49.258352 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 19:18:49.259186 | orchestrator | 2025-05-19 19:18:49.259870 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:18:49.260315 | orchestrator | Monday 19 May 2025 19:18:49 +0000 (0:00:01.122) 0:00:33.437 ************ 2025-05-19 19:18:50.944275 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:54.111731 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:54.112690 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:54.113883 | orchestrator | ok: [testbed-manager] 2025-05-19 19:18:54.115214 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:54.115808 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:54.116286 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:54.117286 | orchestrator | 2025-05-19 19:18:54.117696 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:18:54.118242 | orchestrator | 2025-05-19 19:18:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:18:54.118272 | orchestrator | 2025-05-19 19:18:54 | INFO  | Please wait and do not abort execution. 
2025-05-19 19:18:54.119176 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:18:54.120485 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:18:54.121068 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:18:54.121576 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:18:54.122219 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:18:54.123074 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:18:54.123457 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:18:54.124135 | orchestrator | 2025-05-19 19:18:54.124504 | orchestrator | Monday 19 May 2025 19:18:54 +0000 (0:00:04.859) 0:00:38.296 ************ 2025-05-19 19:18:54.125126 | orchestrator | =============================================================================== 2025-05-19 19:18:54.125582 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.57s 2025-05-19 19:18:54.126104 | orchestrator | Install required packages (Debian) -------------------------------------- 6.99s 2025-05-19 19:18:54.128399 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2025-05-19 19:18:54.129225 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s 2025-05-19 19:18:54.130105 | orchestrator | Create custom facts directory ------------------------------------------- 2.39s 2025-05-19 19:18:54.130603 | orchestrator | Copy fact file ---------------------------------------------------------- 1.96s 2025-05-19 19:18:54.131235 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.12s 2025-05-19 19:18:54.131255 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.98s 2025-05-19 19:18:54.131676 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.93s 2025-05-19 19:18:54.132998 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s 2025-05-19 19:18:54.133629 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-05-19 19:18:54.133982 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-05-19 19:18:54.134441 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-05-19 19:18:54.135569 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.12s 2025-05-19 19:18:54.135986 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.12s 2025-05-19 19:18:54.137158 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2025-05-19 19:18:54.137178 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2025-05-19 19:18:54.137190 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.08s 2025-05-19 19:18:54.535581 | orchestrator | + osism apply bootstrap 2025-05-19 19:18:56.013658 | 
orchestrator | 2025-05-19 19:18:56 | INFO  | Task 89d53f90-51ee-49e7-b6b4-3e33846de5f6 (bootstrap) was prepared for execution. 2025-05-19 19:18:56.013767 | orchestrator | 2025-05-19 19:18:56 | INFO  | It takes a moment until task 89d53f90-51ee-49e7-b6b4-3e33846de5f6 (bootstrap) has been started and output is visible here. 2025-05-19 19:18:59.092472 | orchestrator | 2025-05-19 19:18:59.093318 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-19 19:18:59.093525 | orchestrator | 2025-05-19 19:18:59.094951 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-19 19:18:59.095502 | orchestrator | Monday 19 May 2025 19:18:59 +0000 (0:00:00.105) 0:00:00.105 ************ 2025-05-19 19:18:59.177167 | orchestrator | ok: [testbed-manager] 2025-05-19 19:18:59.194216 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:18:59.224515 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:18:59.244838 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:18:59.317253 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:18:59.318006 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:18:59.318905 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:18:59.319689 | orchestrator | 2025-05-19 19:18:59.320388 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 19:18:59.321290 | orchestrator | 2025-05-19 19:18:59.322112 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:18:59.322684 | orchestrator | Monday 19 May 2025 19:18:59 +0000 (0:00:00.228) 0:00:00.334 ************ 2025-05-19 19:19:02.821073 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:02.821769 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:02.822323 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:02.823249 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:02.824008 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:02.824477 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:02.826424 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:02.827269 | orchestrator | 2025-05-19 19:19:02.827963 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-19 19:19:02.828464 | orchestrator | 2025-05-19 19:19:02.828857 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:19:02.829450 | orchestrator | Monday 19 May 2025 19:19:02 +0000 (0:00:03.503) 0:00:03.837 ************ 2025-05-19 19:19:02.929947 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-19 19:19:02.930138 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-19 19:19:02.933211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-19 19:19:02.937116 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-19 19:19:02.938175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:19:02.973095 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-19 19:19:02.973200 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:19:02.973291 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-19 19:19:02.973485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-19 19:19:02.974069 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2025-05-19 19:19:03.024313 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-19 19:19:03.024458 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:19:03.024482 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-19 19:19:03.024503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:19:03.024515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:19:03.024530 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-19 19:19:03.024549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:19:03.024568 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:19:03.024587 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:19:03.024607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:19:03.269871 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:03.269988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-19 19:19:03.270127 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:03.270417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:19:03.270676 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:19:03.271194 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:19:03.272128 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:19:03.273055 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-19 19:19:03.274128 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:03.274467 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:19:03.275764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:19:03.277320 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:19:03.277342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:19:03.279680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:19:03.279774 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:19:03.281170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:19:03.281192 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-19 19:19:03.282000 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:19:03.282926 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:19:03.283534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:19:03.283979 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:19:03.284545 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:19:03.284934 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 19:19:03.285478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:19:03.285975 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:19:03.286849 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:03.287772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:19:03.288214 | 
orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:03.288998 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 19:19:03.289333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:19:03.290007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 19:19:03.290322 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:03.290704 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 19:19:03.291020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 19:19:03.291659 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 19:19:03.291862 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:03.292198 | orchestrator | 2025-05-19 19:19:03.292591 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-19 19:19:03.292891 | orchestrator | 2025-05-19 19:19:03.293322 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] ************************* 2025-05-19 19:19:03.293587 | orchestrator | Monday 19 May 2025 19:19:03 +0000 (0:00:00.447) 0:00:04.285 ************ 2025-05-19 19:19:03.345686 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:03.370315 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:03.393247 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:03.415260 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:03.461176 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:03.461330 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:03.461929 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:03.462326 | orchestrator | 2025-05-19 19:19:03.465907 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-19 19:19:03.465950 | orchestrator | Monday 19 May 2025 19:19:03 +0000 (0:00:00.193) 0:00:04.478 ************ 2025-05-19 19:19:04.646477 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:04.646923 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:04.649285 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:04.649926 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:04.650928 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:04.651802 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:04.652226 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:04.653503 | orchestrator | 2025-05-19 19:19:04.654444 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-19 19:19:04.655494 | orchestrator | Monday 19 May 2025 19:19:04 +0000 (0:00:01.184) 0:00:05.662 ************ 2025-05-19 19:19:05.819998 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:05.820835 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:05.821297 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:05.824784 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:05.824856 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:05.824870 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:05.825058 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:05.826083 | orchestrator | 2025-05-19 19:19:05.827020 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-19 19:19:05.827676 | orchestrator | Monday 19 May 2025 19:19:05 +0000 (0:00:01.172) 0:00:06.835 ************ 2025-05-19 19:19:06.080723 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:06.080901 | orchestrator | 2025-05-19 19:19:06.085486 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-19 19:19:06.086468 | orchestrator | Monday 19 May 2025 19:19:06 +0000 (0:00:00.260) 0:00:07.095 ************ 2025-05-19 19:19:08.028436 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:08.030789 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:08.030830 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:08.030843 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:08.032447 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:08.034410 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:08.035426 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:08.035878 | orchestrator | 2025-05-19 19:19:08.036692 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-19 19:19:08.037386 | orchestrator | Monday 19 May 2025 19:19:08 +0000 (0:00:01.947) 0:00:09.043 ************ 2025-05-19 19:19:08.100167 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:08.268820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:08.269146 | orchestrator | 2025-05-19 19:19:08.269863 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-19 19:19:08.273685 | orchestrator | Monday 19 May 2025 19:19:08 +0000 (0:00:00.241) 0:00:09.284 ************ 2025-05-19 19:19:09.250795 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:09.250908 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:09.251434 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:09.251856 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:09.252325 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:09.252850 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:09.254944 | orchestrator | 2025-05-19 19:19:09.255768 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-19 19:19:09.256021 | orchestrator | Monday 19 May 2025 19:19:09 +0000 (0:00:00.981) 0:00:10.265 ************ 2025-05-19 19:19:09.305183 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:09.851237 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:09.851870 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:09.852416 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:09.852817 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:09.854178 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:09.854632 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:09.855075 | orchestrator | 2025-05-19 19:19:09.855530 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-19 19:19:09.855979 | orchestrator | Monday 19 May 2025 19:19:09 +0000 (0:00:00.600) 0:00:10.865 ************ 2025-05-19 19:19:09.944056 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:09.972242 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:09.994630 | 
orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:10.267051 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:10.267664 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:10.268453 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:10.269092 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:10.269951 | orchestrator | 2025-05-19 19:19:10.270327 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-19 19:19:10.270811 | orchestrator | Monday 19 May 2025 19:19:10 +0000 (0:00:00.414) 0:00:11.280 ************ 2025-05-19 19:19:10.336970 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:10.363159 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:10.382840 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:10.406272 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:10.463195 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:10.464815 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:10.465505 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:10.466316 | orchestrator | 2025-05-19 19:19:10.467056 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-19 19:19:10.467875 | orchestrator | Monday 19 May 2025 19:19:10 +0000 (0:00:00.196) 0:00:11.477 ************ 2025-05-19 19:19:10.743965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:10.744181 | orchestrator | 2025-05-19 19:19:10.745505 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-19 19:19:10.746316 | orchestrator | Monday 19 May 2025 19:19:10 +0000 (0:00:00.278) 0:00:11.755 ************ 2025-05-19 19:19:11.026857 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:11.027452 | orchestrator | 2025-05-19 19:19:11.028146 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-19 19:19:11.028853 | orchestrator | Monday 19 May 2025 19:19:11 +0000 (0:00:00.286) 0:00:12.042 ************ 2025-05-19 19:19:12.285185 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:12.285292 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:12.285556 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:12.288322 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:12.290293 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:12.292733 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:12.293031 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:12.293842 | orchestrator | 2025-05-19 19:19:12.295432 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-19 19:19:12.295618 | orchestrator | Monday 19 May 2025 19:19:12 +0000 (0:00:01.256) 0:00:13.299 ************ 2025-05-19 19:19:12.364575 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:12.386129 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:12.421833 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:12.442607 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:19:12.513040 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:12.513462 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:12.514532 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:12.515528 | orchestrator | 2025-05-19 19:19:12.516032 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-19 19:19:12.517306 | orchestrator | Monday 19 May 2025 19:19:12 +0000 (0:00:00.229) 0:00:13.528 ************ 2025-05-19 19:19:13.067346 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:13.067528 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:13.068539 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:13.070512 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:13.071636 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:13.071853 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:13.073204 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:13.073338 | orchestrator | 2025-05-19 19:19:13.074687 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-19 19:19:13.075315 | orchestrator | Monday 19 May 2025 19:19:13 +0000 (0:00:00.552) 0:00:14.081 ************ 2025-05-19 19:19:13.170666 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:13.194614 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:13.222708 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:13.250134 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:13.325063 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:13.325169 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:13.325802 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:13.326258 | orchestrator | 2025-05-19 19:19:13.329898 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-19 19:19:13.329947 | orchestrator | Monday 19 May 2025 19:19:13 +0000 (0:00:00.258) 0:00:14.340 ************ 2025-05-19 19:19:13.863668 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:13.864630 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:13.867888 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:13.867929 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:13.867941 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:13.868176 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:13.868983 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:13.869722 | orchestrator | 2025-05-19 19:19:13.870102 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-19 19:19:13.870915 | orchestrator | Monday 19 May 2025 19:19:13 +0000 (0:00:00.539) 0:00:14.880 ************ 2025-05-19 19:19:14.929983 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:14.931039 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:14.931134 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:14.931767 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:14.932792 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:14.933546 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:14.934070 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:14.934830 | orchestrator | 2025-05-19 19:19:14.935203 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-19 19:19:14.935902 | orchestrator | Monday 19 May 2025 
19:19:14 +0000 (0:00:01.064) 0:00:15.944 ************ 2025-05-19 19:19:16.029684 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:16.030410 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:16.030727 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:16.031590 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:16.032471 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:16.033618 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:16.033914 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:16.034865 | orchestrator | 2025-05-19 19:19:16.035550 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-19 19:19:16.036056 | orchestrator | Monday 19 May 2025 19:19:16 +0000 (0:00:01.099) 0:00:17.044 ************ 2025-05-19 19:19:16.339936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:16.340129 | orchestrator | 2025-05-19 19:19:16.340241 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-19 19:19:16.340861 | orchestrator | Monday 19 May 2025 19:19:16 +0000 (0:00:00.309) 0:00:17.353 ************ 2025-05-19 19:19:16.422326 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:17.791924 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:17.792134 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:17.795115 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:17.795181 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:17.795192 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:17.795201 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:17.795210 | orchestrator | 2025-05-19 19:19:17.795344 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 19:19:17.795967 | orchestrator | Monday 19 May 2025 19:19:17 +0000 (0:00:01.452) 0:00:18.806 ************ 2025-05-19 19:19:17.863961 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:17.890252 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:17.913410 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:17.938676 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:18.006494 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:18.007445 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:18.008093 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:18.008859 | orchestrator | 2025-05-19 19:19:18.009699 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 19:19:18.010203 | orchestrator | Monday 19 May 2025 19:19:17 +0000 (0:00:00.216) 0:00:19.022 ************ 2025-05-19 19:19:18.105279 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:18.125983 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:18.151034 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:18.221475 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:18.221865 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:18.222380 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:18.222799 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:18.223480 | orchestrator | 2025-05-19 19:19:18.224063 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 19:19:18.224582 | 
orchestrator | Monday 19 May 2025 19:19:18 +0000 (0:00:00.215) 0:00:19.238 ************ 2025-05-19 19:19:18.296376 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:18.317391 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:18.346435 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:18.366433 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:18.422488 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:18.422749 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:18.423801 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:18.424611 | orchestrator | 2025-05-19 19:19:18.425372 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 19:19:18.426099 | orchestrator | Monday 19 May 2025 19:19:18 +0000 (0:00:00.200) 0:00:19.438 ************ 2025-05-19 19:19:18.708673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:18.709122 | orchestrator | 2025-05-19 19:19:18.710110 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 19:19:18.710897 | orchestrator | Monday 19 May 2025 19:19:18 +0000 (0:00:00.285) 0:00:19.724 ************ 2025-05-19 19:19:19.233074 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:19.233813 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:19.234472 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:19.234884 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:19.237945 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:19.237977 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:19.237990 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:19.238002 | orchestrator | 2025-05-19 19:19:19.238066 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 19:19:19.238385 | orchestrator | Monday 19 May 2025 19:19:19 +0000 (0:00:00.525) 0:00:20.249 ************ 2025-05-19 19:19:19.310828 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:19.335653 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:19.358567 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:19.451710 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:19.452785 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:19.454063 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:19.455532 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:19.456290 | orchestrator | 2025-05-19 19:19:19.457308 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 19:19:19.457755 | orchestrator | Monday 19 May 2025 19:19:19 +0000 (0:00:00.218) 0:00:20.467 ************ 2025-05-19 19:19:20.504705 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:20.505953 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:20.506703 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:20.508005 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:20.509027 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:20.509588 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:20.510569 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:20.511499 | orchestrator | 2025-05-19 19:19:20.512080 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] 
********************* 2025-05-19 19:19:20.512749 | orchestrator | Monday 19 May 2025 19:19:20 +0000 (0:00:01.051) 0:00:21.519 ************ 2025-05-19 19:19:21.066733 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:21.066932 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:21.067993 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:21.068563 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:21.072001 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:21.072652 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:21.073899 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:21.074796 | orchestrator | 2025-05-19 19:19:21.075592 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 19:19:21.076669 | orchestrator | Monday 19 May 2025 19:19:21 +0000 (0:00:00.562) 0:00:22.082 ************ 2025-05-19 19:19:22.135730 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:22.135844 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:22.136811 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:22.137620 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:22.138270 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:22.139043 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:22.139740 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:22.140398 | orchestrator | 2025-05-19 19:19:22.140983 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 19:19:22.141378 | orchestrator | Monday 19 May 2025 19:19:22 +0000 (0:00:01.069) 0:00:23.151 ************ 2025-05-19 19:19:35.810970 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:35.811074 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:35.811088 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:35.811747 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:35.812371 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:35.813773 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:35.814241 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:35.815197 | orchestrator | 2025-05-19 19:19:35.816857 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-19 19:19:35.816922 | orchestrator | Monday 19 May 2025 19:19:35 +0000 (0:00:13.671) 0:00:36.822 ************ 2025-05-19 19:19:35.883740 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:35.912908 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:35.938638 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:35.967643 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:36.037205 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:36.037678 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:36.038331 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:36.038771 | orchestrator | 2025-05-19 19:19:36.039163 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-19 19:19:36.041824 | orchestrator | Monday 19 May 2025 19:19:36 +0000 (0:00:00.231) 0:00:37.053 ************ 2025-05-19 19:19:36.115053 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:36.145564 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:36.170665 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:36.198715 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:36.260238 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:36.261268 | orchestrator | ok: [testbed-node-1] 2025-05-19 
19:19:36.262451 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:36.262864 | orchestrator | 2025-05-19 19:19:36.263924 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-19 19:19:36.264533 | orchestrator | Monday 19 May 2025 19:19:36 +0000 (0:00:00.221) 0:00:37.275 ************ 2025-05-19 19:19:36.337772 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:36.365235 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:36.390562 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:36.418699 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:36.481332 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:36.481937 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:36.482661 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:36.483773 | orchestrator | 2025-05-19 19:19:36.484382 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-19 19:19:36.485033 | orchestrator | Monday 19 May 2025 19:19:36 +0000 (0:00:00.221) 0:00:37.497 ************ 2025-05-19 19:19:36.802806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:36.804245 | orchestrator | 2025-05-19 19:19:36.806004 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-19 19:19:36.806945 | orchestrator | Monday 19 May 2025 19:19:36 +0000 (0:00:00.320) 0:00:37.818 ************ 2025-05-19 19:19:38.412995 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:38.414013 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:38.414853 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:38.415637 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:38.416966 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:38.417406 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:38.418195 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:38.418596 | orchestrator | 2025-05-19 19:19:38.419628 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-19 19:19:38.419844 | orchestrator | Monday 19 May 2025 19:19:38 +0000 (0:00:01.608) 0:00:39.427 ************ 2025-05-19 19:19:39.468808 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:39.468982 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:39.470116 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:39.470903 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:39.471204 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:39.471811 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:39.472336 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:39.472792 | orchestrator | 2025-05-19 19:19:39.473503 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-19 19:19:39.473860 | orchestrator | Monday 19 May 2025 19:19:39 +0000 (0:00:01.055) 0:00:40.483 ************ 2025-05-19 19:19:40.267729 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:40.269016 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:40.270277 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:40.271422 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:40.271959 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:40.272903 | orchestrator | ok: 
[testbed-node-1] 2025-05-19 19:19:40.273956 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:40.275102 | orchestrator | 2025-05-19 19:19:40.275399 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-19 19:19:40.276247 | orchestrator | Monday 19 May 2025 19:19:40 +0000 (0:00:00.798) 0:00:41.281 ************ 2025-05-19 19:19:40.564509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:40.565289 | orchestrator | 2025-05-19 19:19:40.566131 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-19 19:19:40.566729 | orchestrator | Monday 19 May 2025 19:19:40 +0000 (0:00:00.298) 0:00:41.580 ************ 2025-05-19 19:19:41.647659 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:41.647773 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:41.647788 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:41.650862 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:41.652748 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:41.653546 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:41.654386 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:41.655155 | orchestrator | 2025-05-19 19:19:41.655937 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-19 19:19:41.656589 | orchestrator | Monday 19 May 2025 19:19:41 +0000 (0:00:01.076) 0:00:42.657 ************ 2025-05-19 19:19:41.714953 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:19:41.752950 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:19:41.792709 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:19:41.823000 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:19:41.978911 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:19:41.979581 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:19:41.981028 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:19:41.981580 | orchestrator | 2025-05-19 19:19:41.982134 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-19 19:19:41.983032 | orchestrator | Monday 19 May 2025 19:19:41 +0000 (0:00:00.336) 0:00:42.993 ************ 2025-05-19 19:19:53.475932 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:53.476058 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:53.476073 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:53.476149 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:53.477219 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:53.478898 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:53.479841 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:53.480687 | orchestrator | 2025-05-19 19:19:53.481339 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-19 19:19:53.482412 | orchestrator | Monday 19 May 2025 19:19:53 +0000 (0:00:11.495) 0:00:54.488 ************ 2025-05-19 19:19:54.661617 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:54.662690 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:54.664151 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:54.665606 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:54.666364 | 
orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:54.667381 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:54.668105 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:54.668760 | orchestrator | 2025-05-19 19:19:54.669408 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-19 19:19:54.670158 | orchestrator | Monday 19 May 2025 19:19:54 +0000 (0:00:01.185) 0:00:55.674 ************ 2025-05-19 19:19:55.575110 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:55.575427 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:55.579816 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:55.580403 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:55.581000 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:55.581529 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:55.582479 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:55.582574 | orchestrator | 2025-05-19 19:19:55.582972 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-19 19:19:55.583567 | orchestrator | Monday 19 May 2025 19:19:55 +0000 (0:00:00.915) 0:00:56.590 ************ 2025-05-19 19:19:55.663729 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:55.691321 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:55.724472 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:55.754389 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:55.817771 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:55.818673 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:55.819295 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:55.820059 | orchestrator | 2025-05-19 19:19:55.820766 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-19 19:19:55.821659 | orchestrator | Monday 19 May 2025 19:19:55 +0000 (0:00:00.244) 0:00:56.834 ************ 2025-05-19 19:19:55.900262 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:55.933584 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:55.954647 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:55.981455 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:56.038588 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:56.038906 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:56.039685 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:56.042895 | orchestrator | 2025-05-19 19:19:56.042921 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-19 19:19:56.043646 | orchestrator | Monday 19 May 2025 19:19:56 +0000 (0:00:00.220) 0:00:57.055 ************ 2025-05-19 19:19:56.337798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:19:56.337966 | orchestrator | 2025-05-19 19:19:56.338950 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-19 19:19:56.339271 | orchestrator | Monday 19 May 2025 19:19:56 +0000 (0:00:00.298) 0:00:57.354 ************ 2025-05-19 19:19:57.910140 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:57.910285 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:57.910836 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:57.912192 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:57.913221 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:57.914243 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:57.914563 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:57.914997 | orchestrator | 2025-05-19 19:19:57.915710 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-19 19:19:57.916163 | orchestrator | Monday 19 May 2025 19:19:57 +0000 (0:00:01.570) 0:00:58.924 ************ 2025-05-19 19:19:58.472934 | orchestrator | changed: [testbed-manager] 2025-05-19 19:19:58.473049 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:19:58.473647 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:19:58.475362 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:19:58.476219 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:19:58.477203 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:19:58.478334 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:19:58.478680 | orchestrator | 2025-05-19 19:19:58.479606 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-19 19:19:58.480237 | orchestrator | Monday 19 May 2025 19:19:58 +0000 (0:00:00.563) 0:00:59.487 ************ 2025-05-19 19:19:58.553534 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:58.582777 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:58.611304 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:58.654827 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:58.737101 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:58.737811 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:58.738781 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:58.740365 | orchestrator | 2025-05-19 19:19:58.741335 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-19 19:19:58.741852 | orchestrator | Monday 19 May 2025 19:19:58 +0000 (0:00:00.264) 0:00:59.752 ************ 2025-05-19 19:19:59.662864 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:19:59.663226 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:19:59.663931 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:19:59.664945 | orchestrator | ok: [testbed-manager] 2025-05-19 19:19:59.665130 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:19:59.665384 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:19:59.666278 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:19:59.666304 | orchestrator | 2025-05-19 19:19:59.666755 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-19 19:19:59.667480 | orchestrator | Monday 19 May 2025 19:19:59 +0000 (0:00:00.925) 0:01:00.678 ************ 2025-05-19 19:20:10.239956 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:20:10.240082 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:20:10.240163 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:20:10.242509 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:20:10.243379 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:20:10.244763 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:20:10.244912 | orchestrator | ok: [testbed-manager] 2025-05-19 19:20:10.245645 | orchestrator | 2025-05-19 19:20:10.246542 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-19 19:20:10.247265 | orchestrator | Monday 19 May 2025 19:20:10 +0000 (0:00:10.574) 0:01:11.252 ************ 2025-05-19 19:20:49.684632 | orchestrator | ok: 
[testbed-node-3] 2025-05-19 19:20:49.684825 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:20:49.684851 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:20:49.684870 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:20:49.684888 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:20:49.685027 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:20:49.685055 | orchestrator | changed: [testbed-manager] 2025-05-19 19:20:49.685106 | orchestrator | 2025-05-19 19:20:49.686004 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-19 19:20:49.687026 | orchestrator | Monday 19 May 2025 19:20:49 +0000 (0:00:39.442) 0:01:50.694 ************ 2025-05-19 19:21:27.527154 | orchestrator | ok: [testbed-manager] 2025-05-19 19:21:27.527295 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:21:27.527444 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:21:27.527877 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:21:27.528940 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:21:27.529335 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:21:27.530154 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:21:27.530543 | orchestrator | 2025-05-19 19:21:27.532210 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-19 19:21:27.532325 | orchestrator | Monday 19 May 2025 19:21:27 +0000 (0:00:37.844) 0:02:28.538 ************ 2025-05-19 19:22:52.923260 | orchestrator | changed: [testbed-manager] 2025-05-19 19:22:52.923355 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:22:52.923369 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:22:52.923704 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:22:52.924664 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:22:52.925180 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:22:52.926072 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:22:52.927708 | orchestrator | 2025-05-19 19:22:52.927730 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-19 19:22:52.927745 | orchestrator | Monday 19 May 2025 19:22:52 +0000 (0:01:25.393) 0:03:53.932 ************ 2025-05-19 19:22:54.548465 | orchestrator | ok: [testbed-manager] 2025-05-19 19:22:54.549149 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:22:54.550735 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:22:54.551492 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:22:54.552826 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:22:54.553297 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:22:54.554526 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:22:54.554857 | orchestrator | 2025-05-19 19:22:54.555836 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-19 19:22:54.556603 | orchestrator | Monday 19 May 2025 19:22:54 +0000 (0:00:01.630) 0:03:55.562 ************ 2025-05-19 19:23:05.984854 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:05.985087 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:05.987542 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:05.989250 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:05.990783 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:05.992084 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:05.993498 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:05.994571 | orchestrator | 2025-05-19 19:23:05.995494 | orchestrator | TASK [osism.commons.sysctl : Include 
sysctl tasks] ***************************** 2025-05-19 19:23:05.996363 | orchestrator | Monday 19 May 2025 19:23:05 +0000 (0:00:11.434) 0:04:06.997 ************ 2025-05-19 19:23:06.388202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-05-19 19:23:06.388491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-05-19 19:23:06.388625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-05-19 19:23:06.388997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-05-19 19:23:06.391246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-05-19 19:23:06.391500 | orchestrator | 2025-05-19 19:23:06.392300 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-05-19 19:23:06.393239 | orchestrator | Monday 19 May 2025 19:23:06 +0000 (0:00:00.405) 0:04:07.403 ************ 2025-05-19 19:23:06.444356 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 19:23:06.474807 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:06.474894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 19:23:06.475619 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 19:23:06.500469 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:23:06.527734 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:23:06.527956 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-19 19:23:06.555182 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:23:07.085615 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:23:07.085728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:23:07.087535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:23:07.088775 | orchestrator | 2025-05-19 19:23:07.089376 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-19 19:23:07.090326 | orchestrator | Monday 19 May 2025 19:23:07 +0000 (0:00:00.696) 0:04:08.099 ************ 2025-05-19 19:23:07.143894 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 19:23:07.144632 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 19:23:07.146063 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 19:23:07.146976 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 19:23:07.147613 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 19:23:07.148612 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 19:23:07.148930 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 19:23:07.149784 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 19:23:07.150056 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 19:23:07.150654 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 19:23:07.175236 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:07.211493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 19:23:07.211585 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 19:23:07.211599 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 19:23:07.247789 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 19:23:07.247886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 19:23:07.248064 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 19:23:07.248300 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 19:23:07.249306 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 19:23:07.249548 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 19:23:07.249971 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 19:23:07.250293 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 19:23:07.250669 | orchestrator | skipping: [testbed-node-4] => 
(item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 19:23:07.251041 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 19:23:07.251473 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 19:23:07.251814 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 19:23:07.252308 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 19:23:07.252493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 19:23:07.252839 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 19:23:07.254493 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 19:23:07.292799 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-19 19:23:07.293629 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:23:07.294163 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 19:23:07.294857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-19 19:23:07.295257 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-19 19:23:07.295706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-19 19:23:07.296050 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-19 19:23:07.297722 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 19:23:07.297772 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 19:23:07.297794 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 19:23:07.297938 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 19:23:07.298206 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 19:23:07.317618 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:23:11.833853 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:23:11.834012 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 19:23:11.834087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 19:23:11.834168 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 19:23:11.834661 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 19:23:11.834684 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 19:23:11.836126 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 19:23:11.836161 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 19:23:11.836841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 19:23:11.837127 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 19:23:11.837147 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 19:23:11.837467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 19:23:11.840952 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 19:23:11.840992 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 19:23:11.841003 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 19:23:11.841014 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 19:23:11.841026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 19:23:11.841037 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 19:23:11.841048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 19:23:11.841059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 19:23:11.841071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 19:23:11.841082 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 19:23:11.841151 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 19:23:11.841166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 19:23:11.841386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 19:23:11.841724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 19:23:11.842058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 19:23:11.842575 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 19:23:11.842833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 19:23:11.843062 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 19:23:11.843634 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 19:23:11.844062 | orchestrator | 2025-05-19 19:23:11.844470 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-19 19:23:11.844646 | orchestrator | Monday 19 May 2025 19:23:11 +0000 (0:00:04.748) 0:04:12.848 ************ 2025-05-19 19:23:12.479755 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.480208 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 
2025-05-19 19:23:12.485187 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.485436 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.486106 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.487007 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.488657 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 19:23:12.489272 | orchestrator | 2025-05-19 19:23:12.492808 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-19 19:23:12.492860 | orchestrator | Monday 19 May 2025 19:23:12 +0000 (0:00:00.645) 0:04:13.493 ************ 2025-05-19 19:23:12.542368 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 19:23:12.562779 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:12.639561 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 19:23:13.003031 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:23:13.004152 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 19:23:13.004757 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:23:13.006146 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 19:23:13.007244 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:23:13.007599 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 19:23:13.008686 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 19:23:13.009481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 19:23:13.010988 | orchestrator | 2025-05-19 19:23:13.011109 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-19 19:23:13.011960 | orchestrator | Monday 19 May 2025 19:23:12 +0000 (0:00:00.523) 0:04:14.017 ************ 2025-05-19 19:23:13.067896 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 19:23:13.108500 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:13.200724 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 19:23:13.201127 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 19:23:14.525694 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:23:14.526230 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:23:14.528961 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 19:23:14.529024 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:23:14.529039 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 19:23:14.529431 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 
1024}) 2025-05-19 19:23:14.530863 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 19:23:14.531074 | orchestrator | 2025-05-19 19:23:14.531471 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-19 19:23:14.532651 | orchestrator | Monday 19 May 2025 19:23:14 +0000 (0:00:01.523) 0:04:15.540 ************ 2025-05-19 19:23:14.605380 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:14.630127 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:23:14.653207 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:23:14.678364 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:23:14.800478 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:23:14.802336 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:23:14.803211 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:23:14.804698 | orchestrator | 2025-05-19 19:23:14.807456 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-19 19:23:14.808104 | orchestrator | Monday 19 May 2025 19:23:14 +0000 (0:00:00.275) 0:04:15.816 ************ 2025-05-19 19:23:20.801253 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:20.802632 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:20.802671 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:20.803028 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:20.804796 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:20.804819 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:20.804831 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:20.805299 | orchestrator | 2025-05-19 19:23:20.805844 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-19 19:23:20.806265 | orchestrator | Monday 19 May 2025 19:23:20 +0000 (0:00:06.001) 0:04:21.817 ************ 2025-05-19 19:23:20.891142 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-19 19:23:20.891281 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-19 19:23:20.923943 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:20.970652 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:23:20.970829 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-19 19:23:21.021049 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-19 19:23:21.021199 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:23:21.021347 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-19 19:23:21.054865 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:23:21.138859 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:23:21.138969 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-19 19:23:21.139066 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:23:21.139825 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-19 19:23:21.141991 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:23:21.142123 | orchestrator | 2025-05-19 19:23:21.142470 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-19 19:23:21.144219 | orchestrator | Monday 19 May 2025 19:23:21 +0000 (0:00:00.337) 0:04:22.154 ************ 2025-05-19 19:23:22.122282 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-19 19:23:22.122543 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-19 19:23:22.126149 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2025-05-19 19:23:22.126988 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-19 19:23:22.127831 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-19 19:23:22.128586 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-19 19:23:22.129164 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-19 19:23:22.130071 | orchestrator | 2025-05-19 19:23:22.130707 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-19 19:23:22.131261 | orchestrator | Monday 19 May 2025 19:23:22 +0000 (0:00:00.981) 0:04:23.136 ************ 2025-05-19 19:23:22.517757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:23:22.519092 | orchestrator | 2025-05-19 19:23:22.519144 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-19 19:23:22.520074 | orchestrator | Monday 19 May 2025 19:23:22 +0000 (0:00:00.395) 0:04:23.531 ************ 2025-05-19 19:23:23.912932 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:23.913588 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:23.914352 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:23.915139 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:23.916049 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:23.916925 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:23.917088 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:23.917731 | orchestrator | 2025-05-19 19:23:23.918072 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-19 19:23:23.918539 | orchestrator | Monday 19 May 2025 19:23:23 +0000 (0:00:01.394) 0:04:24.925 ************ 2025-05-19 19:23:24.597499 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:24.597616 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:24.597631 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:24.598073 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:24.598757 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:24.599918 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:24.600851 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:24.601550 | orchestrator | 2025-05-19 19:23:24.601898 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-19 19:23:24.602696 | orchestrator | Monday 19 May 2025 19:23:24 +0000 (0:00:00.686) 0:04:25.612 ************ 2025-05-19 19:23:25.221770 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:25.221882 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:25.222874 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:25.223536 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:25.224555 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:25.225500 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:25.227546 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:25.227763 | orchestrator | 2025-05-19 19:23:25.228585 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-19 19:23:25.229910 | orchestrator | Monday 19 May 2025 19:23:25 +0000 (0:00:00.625) 0:04:26.237 ************ 2025-05-19 19:23:25.858499 | orchestrator | ok: [testbed-manager] 2025-05-19 
19:23:25.858613 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:25.861068 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:25.861616 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:25.862261 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:25.862584 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:25.862849 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:25.863610 | orchestrator | 2025-05-19 19:23:25.864473 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-19 19:23:25.865098 | orchestrator | Monday 19 May 2025 19:23:25 +0000 (0:00:00.634) 0:04:26.872 ************ 2025-05-19 19:23:26.827579 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680706.8671844, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.827762 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680737.411081, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.827946 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680746.4995277, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.829014 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680755.837759, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.829050 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680749.4959266, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.829458 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680733.6798692, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.829722 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747680749.153601, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.830118 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680729.1974473, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.830508 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680656.6957726, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.830753 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680675.690647, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.833194 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680660.3373055, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.833708 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680669.6287005, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.834216 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680654.572563, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.834845 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747680668.659634, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:23:26.835185 | orchestrator | 2025-05-19 19:23:26.835720 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-19 19:23:26.836183 | orchestrator | Monday 19 May 2025 19:23:26 +0000 (0:00:00.966) 0:04:27.838 ************ 2025-05-19 19:23:27.944444 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:27.944553 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:27.946649 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:27.946677 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:27.946689 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:27.946700 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:27.946711 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:27.946919 | orchestrator | 2025-05-19 19:23:27.947505 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-19 19:23:27.948160 | orchestrator | Monday 19 May 2025 19:23:27 +0000 (0:00:01.118) 0:04:28.957 ************ 2025-05-19 19:23:29.143557 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:29.145300 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:29.146449 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:29.147338 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:29.150068 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:29.150350 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:29.151504 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:29.152278 | orchestrator | 2025-05-19 19:23:29.153620 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] 
******************** 2025-05-19 19:23:29.155071 | orchestrator | Monday 19 May 2025 19:23:29 +0000 (0:00:01.201) 0:04:30.159 ************ 2025-05-19 19:23:29.221595 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:23:29.292691 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:23:29.331954 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:23:29.369222 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:23:29.422266 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:23:29.494117 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:23:29.495109 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:23:29.496337 | orchestrator | 2025-05-19 19:23:29.499924 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-19 19:23:29.502618 | orchestrator | Monday 19 May 2025 19:23:29 +0000 (0:00:00.350) 0:04:30.509 ************ 2025-05-19 19:23:30.255549 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:30.255725 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:30.255807 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:30.256156 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:30.256531 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:30.257025 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:30.257537 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:30.257991 | orchestrator | 2025-05-19 19:23:30.258450 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-19 19:23:30.259391 | orchestrator | Monday 19 May 2025 19:23:30 +0000 (0:00:00.762) 0:04:31.272 ************ 2025-05-19 19:23:30.692386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:23:30.692591 | orchestrator | 2025-05-19 19:23:30.693704 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-19 19:23:30.698114 | orchestrator | Monday 19 May 2025 19:23:30 +0000 (0:00:00.434) 0:04:31.706 ************ 2025-05-19 19:23:38.375173 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:38.375684 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:38.376940 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:38.377910 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:38.378610 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:38.379040 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:38.379698 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:38.380330 | orchestrator | 2025-05-19 19:23:38.380990 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-19 19:23:38.381058 | orchestrator | Monday 19 May 2025 19:23:38 +0000 (0:00:07.682) 0:04:39.389 ************ 2025-05-19 19:23:39.558699 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:39.559754 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:39.560591 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:39.561748 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:39.562213 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:39.562793 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:39.564541 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:39.567853 | orchestrator | 2025-05-19 19:23:39.567885 | orchestrator | TASK 
[osism.services.rng : Manage rng service] ********************************* 2025-05-19 19:23:39.569700 | orchestrator | Monday 19 May 2025 19:23:39 +0000 (0:00:01.184) 0:04:40.573 ************ 2025-05-19 19:23:41.470988 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:41.472910 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:41.477432 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:41.477504 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:41.480909 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:41.483062 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:41.485378 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:41.485490 | orchestrator | 2025-05-19 19:23:41.485516 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-19 19:23:41.485541 | orchestrator | Monday 19 May 2025 19:23:41 +0000 (0:00:01.909) 0:04:42.483 ************ 2025-05-19 19:23:41.914672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:23:41.915651 | orchestrator | 2025-05-19 19:23:41.921743 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-19 19:23:41.921824 | orchestrator | Monday 19 May 2025 19:23:41 +0000 (0:00:00.445) 0:04:42.929 ************ 2025-05-19 19:23:49.996293 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:49.998249 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:49.998302 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:50.000931 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:50.001133 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:50.005189 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:50.005234 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:50.005248 | orchestrator | 2025-05-19 19:23:50.005263 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-19 19:23:50.007635 | orchestrator | Monday 19 May 2025 19:23:49 +0000 (0:00:08.081) 0:04:51.010 ************ 2025-05-19 19:23:50.368622 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:50.781770 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:50.781885 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:50.782181 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:50.783275 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:50.784355 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:50.787092 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:50.788459 | orchestrator | 2025-05-19 19:23:50.789436 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-19 19:23:50.790097 | orchestrator | Monday 19 May 2025 19:23:50 +0000 (0:00:00.786) 0:04:51.797 ************ 2025-05-19 19:23:52.812253 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:52.812880 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:52.813447 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:52.816048 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:52.816817 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:52.817569 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:52.818130 | orchestrator | changed: [testbed-node-0] 
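The smartd tasks above follow a simple install/configure/enable pattern: install smartmontools, create /var/log/smartd, drop a configuration file, and then manage the service in the next task. A minimal self-contained sketch of that pattern is shown here for orientation; it is not the osism.services.smartd role itself, and the configuration content, the handler, and the smartd service name are illustrative assumptions.

---
# Sketch of the install/configure/enable pattern seen in the smartd tasks above.
# File contents, paths, and the service name are assumptions for illustration.
- hosts: all
  become: true
  tasks:
    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"

    - name: Copy smartmontools configuration file
      ansible.builtin.copy:
        dest: /etc/smartd.conf
        content: |
          # Monitor all detected devices and track all SMART attributes
          DEVICESCAN -a
        mode: "0644"
      notify: Restart smartd

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: started
        enabled: true

  handlers:
    - name: Restart smartd
      ansible.builtin.service:
        name: smartd
        state: restarted

Depending on the Debian or Ubuntu release, the systemd unit may be named smartd or smartmontools, so the service name in such a sketch may need adjusting.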
2025-05-19 19:23:52.818875 | orchestrator | 2025-05-19 19:23:52.819594 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-19 19:23:52.820013 | orchestrator | Monday 19 May 2025 19:23:52 +0000 (0:00:02.029) 0:04:53.826 ************ 2025-05-19 19:23:53.832620 | orchestrator | changed: [testbed-manager] 2025-05-19 19:23:53.833154 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:23:53.836945 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:23:53.836991 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:23:53.837003 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:23:53.837014 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:23:53.837026 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:23:53.837288 | orchestrator | 2025-05-19 19:23:53.837568 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-19 19:23:53.838060 | orchestrator | Monday 19 May 2025 19:23:53 +0000 (0:00:01.021) 0:04:54.847 ************ 2025-05-19 19:23:53.940922 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:53.979077 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:54.017715 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:54.053818 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:54.111986 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:54.113208 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:54.116948 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:54.117021 | orchestrator | 2025-05-19 19:23:54.117036 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-19 19:23:54.117050 | orchestrator | Monday 19 May 2025 19:23:54 +0000 (0:00:00.280) 0:04:55.128 ************ 2025-05-19 19:23:54.194185 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:54.262129 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:54.301667 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:54.339593 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:54.430819 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:54.431030 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:54.432067 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:54.434491 | orchestrator | 2025-05-19 19:23:54.434539 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-19 19:23:54.434553 | orchestrator | Monday 19 May 2025 19:23:54 +0000 (0:00:00.318) 0:04:55.447 ************ 2025-05-19 19:23:54.542489 | orchestrator | ok: [testbed-manager] 2025-05-19 19:23:54.578105 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:23:54.620149 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:23:54.681337 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:23:54.759593 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:23:54.760605 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:23:54.761665 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:23:54.762089 | orchestrator | 2025-05-19 19:23:54.763547 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-19 19:23:54.764496 | orchestrator | Monday 19 May 2025 19:23:54 +0000 (0:00:00.326) 0:04:55.773 ************ 2025-05-19 19:24:00.767364 | orchestrator | ok: [testbed-manager] 2025-05-19 19:24:00.767568 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:24:00.767654 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:24:00.771748 | orchestrator 
| ok: [testbed-node-1] 2025-05-19 19:24:00.771791 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:24:00.771805 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:24:00.771817 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:24:00.771829 | orchestrator | 2025-05-19 19:24:00.771844 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-19 19:24:00.771857 | orchestrator | Monday 19 May 2025 19:24:00 +0000 (0:00:06.007) 0:05:01.781 ************ 2025-05-19 19:24:01.200507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:24:01.200994 | orchestrator | 2025-05-19 19:24:01.201377 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-19 19:24:01.202490 | orchestrator | Monday 19 May 2025 19:24:01 +0000 (0:00:00.434) 0:05:02.215 ************ 2025-05-19 19:24:01.293061 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.293254 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-19 19:24:01.293273 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.294185 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-19 19:24:01.346836 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:24:01.347016 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.388504 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-19 19:24:01.388829 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:24:01.434988 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:24:01.435162 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.435225 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-19 19:24:01.435777 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.490704 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-19 19:24:01.490903 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:24:01.490922 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.491055 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-19 19:24:01.576520 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:24:01.576742 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:24:01.577834 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-19 19:24:01.582082 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-19 19:24:01.582924 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:24:01.584335 | orchestrator | 2025-05-19 19:24:01.585248 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-19 19:24:01.586100 | orchestrator | Monday 19 May 2025 19:24:01 +0000 (0:00:00.375) 0:05:02.591 ************ 2025-05-19 19:24:01.985814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:24:01.985942 | orchestrator | 2025-05-19 19:24:01.988506 | 
orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-19 19:24:01.988947 | orchestrator | Monday 19 May 2025 19:24:01 +0000 (0:00:00.407) 0:05:02.999 ************ 2025-05-19 19:24:02.063516 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-05-19 19:24:02.099207 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:24:02.099637 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-19 19:24:02.100226 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-19 19:24:02.137868 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:24:02.137977 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-19 19:24:02.200133 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:24:02.200319 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-19 19:24:02.255107 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:24:02.255224 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-19 19:24:02.324027 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:24:02.324797 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:24:02.324882 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-19 19:24:02.325187 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:24:02.326651 | orchestrator | 2025-05-19 19:24:02.327347 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-19 19:24:02.327374 | orchestrator | Monday 19 May 2025 19:24:02 +0000 (0:00:00.340) 0:05:03.339 ************ 2025-05-19 19:24:02.749440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:24:02.750409 | orchestrator | 2025-05-19 19:24:02.750834 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-19 19:24:02.751956 | orchestrator | Monday 19 May 2025 19:24:02 +0000 (0:00:00.425) 0:05:03.764 ************ 2025-05-19 19:24:36.608812 | orchestrator | changed: [testbed-manager] 2025-05-19 19:24:36.608944 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:24:36.608960 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:24:36.609038 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:24:36.610459 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:24:36.610951 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:24:36.611762 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:24:36.612359 | orchestrator | 2025-05-19 19:24:36.613082 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-19 19:24:36.613657 | orchestrator | Monday 19 May 2025 19:24:36 +0000 (0:00:33.854) 0:05:37.619 ************ 2025-05-19 19:24:44.124942 | orchestrator | changed: [testbed-manager] 2025-05-19 19:24:44.125390 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:24:44.125731 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:24:44.130715 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:24:44.130773 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:24:44.130785 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:24:44.132248 | orchestrator | changed: [testbed-node-0] 
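The cleanup tasks here, together with the ones that follow (unattended-upgrades removal, cache autoclean, autoremove, and deletion of the cloud-init configuration directory), reduce to plain apt housekeeping. A minimal sketch of that pattern follows, with an illustrative package list rather than the osism.commons.cleanup defaults:

---
# Sketch of the cleanup pattern only; the package list is an assumption,
# not the actual defaults of osism.commons.cleanup.
- hosts: all
  become: true
  tasks:
    - name: Remove packages that are not wanted on the nodes
      ansible.builtin.apt:
        name:
          - cloud-init
          - unattended-upgrades
        state: absent
        purge: true

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true

    - name: Remove cloud-init configuration directory
      ansible.builtin.file:
        path: /etc/cloud
        state: absent

Purging rather than merely removing keeps leftover configuration from resurfacing on later runs; the explicit directory removal mirrors the "Remove cloud-init configuration directory" step that appears further down in this log.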
2025-05-19 19:24:44.137899 | orchestrator | 2025-05-19 19:24:44.143497 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-19 19:24:44.146772 | orchestrator | Monday 19 May 2025 19:24:44 +0000 (0:00:07.519) 0:05:45.139 ************ 2025-05-19 19:24:51.176250 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:24:51.176578 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:24:51.177742 | orchestrator | changed: [testbed-manager] 2025-05-19 19:24:51.179768 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:24:51.180173 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:24:51.180644 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:24:51.181120 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:24:51.181577 | orchestrator | 2025-05-19 19:24:51.182287 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-19 19:24:51.182525 | orchestrator | Monday 19 May 2025 19:24:51 +0000 (0:00:07.050) 0:05:52.189 ************ 2025-05-19 19:24:52.744951 | orchestrator | ok: [testbed-manager] 2025-05-19 19:24:52.745062 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:24:52.746551 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:24:52.747316 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:24:52.748377 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:24:52.749402 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:24:52.751594 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:24:52.751916 | orchestrator | 2025-05-19 19:24:52.752825 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-19 19:24:52.753565 | orchestrator | Monday 19 May 2025 19:24:52 +0000 (0:00:01.571) 0:05:53.760 ************ 2025-05-19 19:24:57.938118 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:24:57.939229 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:24:57.940184 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:24:57.941645 | orchestrator | changed: [testbed-manager] 2025-05-19 19:24:57.942126 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:24:57.942867 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:24:57.943512 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:24:57.945213 | orchestrator | 2025-05-19 19:24:57.946839 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-19 19:24:57.947168 | orchestrator | Monday 19 May 2025 19:24:57 +0000 (0:00:05.192) 0:05:58.953 ************ 2025-05-19 19:24:58.349276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:24:58.349434 | orchestrator | 2025-05-19 19:24:58.349902 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-19 19:24:58.354223 | orchestrator | Monday 19 May 2025 19:24:58 +0000 (0:00:00.410) 0:05:59.364 ************ 2025-05-19 19:24:59.095507 | orchestrator | changed: [testbed-manager] 2025-05-19 19:24:59.095641 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:24:59.095665 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:24:59.095790 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:24:59.095812 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:24:59.096335 | orchestrator | changed: 
[testbed-node-1] 2025-05-19 19:24:59.096926 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:24:59.097898 | orchestrator | 2025-05-19 19:24:59.098671 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-19 19:24:59.099288 | orchestrator | Monday 19 May 2025 19:24:59 +0000 (0:00:00.744) 0:06:00.109 ************ 2025-05-19 19:25:00.770576 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:00.770950 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:25:00.772187 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:25:00.772738 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:25:00.773490 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:25:00.774734 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:25:00.775785 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:25:00.776193 | orchestrator | 2025-05-19 19:25:00.776969 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-19 19:25:00.777539 | orchestrator | Monday 19 May 2025 19:25:00 +0000 (0:00:01.676) 0:06:01.785 ************ 2025-05-19 19:25:01.545625 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:01.546105 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:01.547237 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:01.547720 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:01.548705 | orchestrator | changed: [testbed-manager] 2025-05-19 19:25:01.549144 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:01.549781 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:01.550194 | orchestrator | 2025-05-19 19:25:01.550694 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-19 19:25:01.551588 | orchestrator | Monday 19 May 2025 19:25:01 +0000 (0:00:00.775) 0:06:02.561 ************ 2025-05-19 19:25:01.665327 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:01.704776 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:01.740038 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:01.771936 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:01.837860 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:01.838342 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:01.839797 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:01.840408 | orchestrator | 2025-05-19 19:25:01.840691 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-19 19:25:01.841456 | orchestrator | Monday 19 May 2025 19:25:01 +0000 (0:00:00.293) 0:06:02.854 ************ 2025-05-19 19:25:01.935123 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:01.970247 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:02.015118 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:02.044297 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:02.228094 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:02.228273 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:02.228949 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:02.229903 | orchestrator | 2025-05-19 19:25:02.230538 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-19 19:25:02.231196 | orchestrator | Monday 19 May 2025 19:25:02 +0000 (0:00:00.390) 0:06:03.244 ************ 2025-05-19 19:25:02.338112 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:02.364737 | 
orchestrator | ok: [testbed-node-3] 2025-05-19 19:25:02.412261 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:25:02.444469 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:25:02.502981 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:25:02.503544 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:25:02.504481 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:25:02.505547 | orchestrator | 2025-05-19 19:25:02.506267 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-19 19:25:02.506561 | orchestrator | Monday 19 May 2025 19:25:02 +0000 (0:00:00.274) 0:06:03.519 ************ 2025-05-19 19:25:02.564532 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:02.595280 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:02.625214 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:02.654940 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:02.683597 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:02.750942 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:02.751317 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:02.752247 | orchestrator | 2025-05-19 19:25:02.753195 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-19 19:25:02.753712 | orchestrator | Monday 19 May 2025 19:25:02 +0000 (0:00:00.248) 0:06:03.767 ************ 2025-05-19 19:25:02.864222 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:02.896075 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:25:02.932707 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:25:02.978629 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:25:03.062550 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:25:03.064049 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:25:03.065037 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:25:03.066407 | orchestrator | 2025-05-19 19:25:03.067889 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-19 19:25:03.067928 | orchestrator | Monday 19 May 2025 19:25:03 +0000 (0:00:00.310) 0:06:04.077 ************ 2025-05-19 19:25:03.123934 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:03.165254 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:03.201244 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:03.238256 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:03.264027 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:03.313620 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:03.313970 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:03.313997 | orchestrator | 2025-05-19 19:25:03.314479 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-19 19:25:03.317774 | orchestrator | Monday 19 May 2025 19:25:03 +0000 (0:00:00.252) 0:06:04.330 ************ 2025-05-19 19:25:03.391853 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:03.421913 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:03.454906 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:03.490438 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:03.521340 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:03.584228 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:03.585034 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:03.586791 | orchestrator | 2025-05-19 19:25:03.587679 | orchestrator | TASK 
[osism.services.docker : Include docker install tasks] ******************** 2025-05-19 19:25:03.588513 | orchestrator | Monday 19 May 2025 19:25:03 +0000 (0:00:00.269) 0:06:04.600 ************ 2025-05-19 19:25:04.135917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:25:04.137619 | orchestrator | 2025-05-19 19:25:04.138821 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-19 19:25:04.139910 | orchestrator | Monday 19 May 2025 19:25:04 +0000 (0:00:00.550) 0:06:05.150 ************ 2025-05-19 19:25:05.135815 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:05.135928 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:25:05.137512 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:25:05.137538 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:25:05.137811 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:25:05.138239 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:25:05.139387 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:25:05.139704 | orchestrator | 2025-05-19 19:25:05.140217 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-19 19:25:05.140594 | orchestrator | Monday 19 May 2025 19:25:05 +0000 (0:00:00.998) 0:06:06.148 ************ 2025-05-19 19:25:07.805938 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:25:07.806458 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:07.806806 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:25:07.807926 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:25:07.809058 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:25:07.810010 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:25:07.810497 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:25:07.810993 | orchestrator | 2025-05-19 19:25:07.811804 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-19 19:25:07.812457 | orchestrator | Monday 19 May 2025 19:25:07 +0000 (0:00:02.672) 0:06:08.821 ************ 2025-05-19 19:25:07.895012 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-19 19:25:07.895131 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-19 19:25:07.895207 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-19 19:25:07.964749 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:25:07.965098 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-19 19:25:07.966112 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-19 19:25:07.966851 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-19 19:25:08.029971 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:08.030135 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-19 19:25:08.030737 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-19 19:25:08.033191 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-19 19:25:08.103628 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:08.103798 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-19 19:25:08.104784 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-19 19:25:08.105188 | 
orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-19 19:25:08.170993 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:08.172053 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-19 19:25:08.172755 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-19 19:25:08.173924 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-19 19:25:08.245780 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:08.246491 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-19 19:25:08.247480 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-19 19:25:08.247813 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-19 19:25:08.389752 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:08.390221 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-19 19:25:08.391110 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-19 19:25:08.392057 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-19 19:25:08.394807 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:08.394832 | orchestrator | 2025-05-19 19:25:08.394846 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-19 19:25:08.394860 | orchestrator | Monday 19 May 2025 19:25:08 +0000 (0:00:00.585) 0:06:09.406 ************ 2025-05-19 19:25:14.626004 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:14.626288 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:14.626858 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:14.626883 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:14.627449 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:14.628494 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:14.628514 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:14.629644 | orchestrator | 2025-05-19 19:25:14.630470 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-19 19:25:14.630918 | orchestrator | Monday 19 May 2025 19:25:14 +0000 (0:00:06.233) 0:06:15.640 ************ 2025-05-19 19:25:15.669448 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:15.669653 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:15.670166 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:15.670640 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:15.671473 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:15.674846 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:15.675028 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:15.675615 | orchestrator | 2025-05-19 19:25:15.676361 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-19 19:25:15.676765 | orchestrator | Monday 19 May 2025 19:25:15 +0000 (0:00:01.043) 0:06:16.684 ************ 2025-05-19 19:25:22.731654 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:22.731769 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:22.731779 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:22.731828 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:22.732486 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:22.732624 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:22.733078 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:22.733611 | orchestrator | 
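The docker install tasks add the upstream apt repository (gpg key, repository, cache update) and, in the tasks that follow, pin the docker and docker-cli package versions and hold containerd. A minimal sketch of that repository-plus-pinning pattern is given below, assuming Ubuntu 24.04 (noble) and illustrative key path, repository line, and pin value; none of these are the actual defaults of the osism.services.docker role:

---
# Sketch of the repository + version-pinning pattern; URLs, the pin value,
# and file paths are illustrative assumptions.
- hosts: all
  become: true
  vars:
    docker_pin_version: "5:27.*"   # assumption: pin to one major release via apt preferences
  tasks:
    - name: Create apt keyring directory
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add repository and update package cache
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable"
        state: present
        update_cache: true

    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce
        content: |
          Package: docker-ce
          Pin: version {{ docker_pin_version }}
          Pin-Priority: 1000
        mode: "0644"

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold

The unlock/lock pair around the containerd install in the log suggests the hold is released only for the explicit install step and re-applied immediately afterwards, so later apt runs cannot upgrade containerd implicitly.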
2025-05-19 19:25:22.737893 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-19 19:25:22.737909 | orchestrator | Monday 19 May 2025 19:25:22 +0000 (0:00:07.060) 0:06:23.745 ************ 2025-05-19 19:25:25.957466 | orchestrator | changed: [testbed-manager] 2025-05-19 19:25:25.957670 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:25.958496 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:25.959806 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:25.961894 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:25.962569 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:25.963654 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:25.963827 | orchestrator | 2025-05-19 19:25:25.964547 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-19 19:25:25.965252 | orchestrator | Monday 19 May 2025 19:25:25 +0000 (0:00:03.226) 0:06:26.971 ************ 2025-05-19 19:25:27.271633 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:27.273173 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:27.274122 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:27.274146 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:27.274160 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:27.275209 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:27.275462 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:27.276604 | orchestrator | 2025-05-19 19:25:27.277654 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-19 19:25:27.277759 | orchestrator | Monday 19 May 2025 19:25:27 +0000 (0:00:01.314) 0:06:28.285 ************ 2025-05-19 19:25:28.753057 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:28.755734 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:28.757516 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:28.757537 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:28.759776 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:28.760090 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:28.761648 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:28.762165 | orchestrator | 2025-05-19 19:25:28.763910 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-19 19:25:28.763949 | orchestrator | Monday 19 May 2025 19:25:28 +0000 (0:00:01.479) 0:06:29.765 ************ 2025-05-19 19:25:28.955176 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:25:29.022920 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:25:29.092577 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:25:29.157909 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:25:29.335613 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:25:29.336758 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:25:29.337232 | orchestrator | changed: [testbed-manager] 2025-05-19 19:25:29.337853 | orchestrator | 2025-05-19 19:25:29.339429 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-19 19:25:29.339458 | orchestrator | Monday 19 May 2025 19:25:29 +0000 (0:00:00.587) 0:06:30.352 ************ 2025-05-19 19:25:38.875817 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:38.876341 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:38.876565 | orchestrator | changed: [testbed-node-4] 
2025-05-19 19:25:38.877869 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:38.879757 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:38.880492 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:38.881421 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:38.882062 | orchestrator | 2025-05-19 19:25:38.883099 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-19 19:25:38.883751 | orchestrator | Monday 19 May 2025 19:25:38 +0000 (0:00:09.536) 0:06:39.888 ************ 2025-05-19 19:25:39.810439 | orchestrator | changed: [testbed-manager] 2025-05-19 19:25:39.811931 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:39.812609 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:39.812628 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:39.814511 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:39.815183 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:39.815203 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:39.815790 | orchestrator | 2025-05-19 19:25:39.816373 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-19 19:25:39.816860 | orchestrator | Monday 19 May 2025 19:25:39 +0000 (0:00:00.933) 0:06:40.822 ************ 2025-05-19 19:25:51.588680 | orchestrator | ok: [testbed-manager] 2025-05-19 19:25:51.588870 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:25:51.590742 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:25:51.590860 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:25:51.591629 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:25:51.594598 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:25:51.594637 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:25:51.594649 | orchestrator | 2025-05-19 19:25:51.594662 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-19 19:25:51.595454 | orchestrator | Monday 19 May 2025 19:25:51 +0000 (0:00:11.778) 0:06:52.601 ************ 2025-05-19 19:26:03.473515 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:03.473699 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:03.473719 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:03.473732 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:03.473743 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:03.474361 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:03.474478 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:03.474497 | orchestrator | 2025-05-19 19:26:03.474512 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-19 19:26:03.474525 | orchestrator | Monday 19 May 2025 19:26:03 +0000 (0:00:11.883) 0:07:04.485 ************ 2025-05-19 19:26:03.820415 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-19 19:26:04.658781 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-19 19:26:04.658961 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-19 19:26:04.659747 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-19 19:26:04.660543 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-19 19:26:04.661284 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-19 19:26:04.662517 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-19 19:26:04.663523 | 
orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-19 19:26:04.664698 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-19 19:26:04.665085 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-19 19:26:04.665902 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-19 19:26:04.667699 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-19 19:26:04.668639 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-19 19:26:04.670517 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-19 19:26:04.670792 | orchestrator | 2025-05-19 19:26:04.671611 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-19 19:26:04.672062 | orchestrator | Monday 19 May 2025 19:26:04 +0000 (0:00:01.189) 0:07:05.674 ************ 2025-05-19 19:26:04.824263 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:04.893035 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:04.960734 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:05.022554 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:05.088956 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:05.210819 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:05.210950 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:05.211531 | orchestrator | 2025-05-19 19:26:05.212547 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-19 19:26:05.213357 | orchestrator | Monday 19 May 2025 19:26:05 +0000 (0:00:00.552) 0:07:06.227 ************ 2025-05-19 19:26:08.568367 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:08.568481 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:08.568497 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:08.570399 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:08.570802 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:08.572020 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:08.572508 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:08.573639 | orchestrator | 2025-05-19 19:26:08.574106 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-19 19:26:08.574855 | orchestrator | Monday 19 May 2025 19:26:08 +0000 (0:00:03.351) 0:07:09.578 ************ 2025-05-19 19:26:08.693202 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:08.753645 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:08.816568 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:09.040674 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:09.106263 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:09.223564 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:09.223863 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:09.224830 | orchestrator | 2025-05-19 19:26:09.225705 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-19 19:26:09.226516 | orchestrator | Monday 19 May 2025 19:26:09 +0000 (0:00:00.657) 0:07:10.236 ************ 2025-05-19 19:26:09.304641 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-19 19:26:09.305108 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-19 19:26:09.375274 | orchestrator | skipping: [testbed-manager] 2025-05-19 
19:26:09.375495 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-19 19:26:09.376010 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-19 19:26:09.445586 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:09.446495 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-19 19:26:09.446991 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-05-19 19:26:09.519917 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:09.520376 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-19 19:26:09.521559 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-19 19:26:09.592168 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:09.592774 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-19 19:26:09.594113 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-19 19:26:09.659799 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:09.660887 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-19 19:26:09.661897 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-19 19:26:09.782714 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:09.783341 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-19 19:26:09.783744 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-19 19:26:09.785666 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:09.790662 | orchestrator | 2025-05-19 19:26:09.791745 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-19 19:26:09.792848 | orchestrator | Monday 19 May 2025 19:26:09 +0000 (0:00:00.562) 0:07:10.799 ************ 2025-05-19 19:26:09.914665 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:09.998248 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:10.068540 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:10.135151 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:10.205888 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:10.309504 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:10.309618 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:10.309880 | orchestrator | 2025-05-19 19:26:10.310820 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-19 19:26:10.311624 | orchestrator | Monday 19 May 2025 19:26:10 +0000 (0:00:00.524) 0:07:11.324 ************ 2025-05-19 19:26:10.446265 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:10.510805 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:10.574469 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:10.642552 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:10.702791 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:10.796411 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:10.797155 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:10.798242 | orchestrator | 2025-05-19 19:26:10.798810 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-19 19:26:10.799715 | orchestrator | Monday 19 May 2025 19:26:10 +0000 (0:00:00.487) 0:07:11.811 ************ 2025-05-19 19:26:10.924967 | orchestrator | skipping: [testbed-manager] 2025-05-19 
19:26:10.986276 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:11.053636 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:11.116984 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:11.178625 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:11.317040 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:11.317821 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:11.320693 | orchestrator | 2025-05-19 19:26:11.320985 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-19 19:26:11.322426 | orchestrator | Monday 19 May 2025 19:26:11 +0000 (0:00:00.521) 0:07:12.332 ************ 2025-05-19 19:26:17.124608 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:17.124821 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:17.125605 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:17.126689 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:17.126712 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:17.126929 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:17.128508 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:17.128735 | orchestrator | 2025-05-19 19:26:17.129960 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-19 19:26:17.130355 | orchestrator | Monday 19 May 2025 19:26:17 +0000 (0:00:05.805) 0:07:18.138 ************ 2025-05-19 19:26:17.944284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:26:17.944731 | orchestrator | 2025-05-19 19:26:17.945787 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-19 19:26:17.946731 | orchestrator | Monday 19 May 2025 19:26:17 +0000 (0:00:00.820) 0:07:18.958 ************ 2025-05-19 19:26:18.365217 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:18.820779 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:18.820928 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:18.821783 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:18.822102 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:18.822751 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:18.823191 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:18.824222 | orchestrator | 2025-05-19 19:26:18.824666 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-19 19:26:18.825330 | orchestrator | Monday 19 May 2025 19:26:18 +0000 (0:00:00.876) 0:07:19.834 ************ 2025-05-19 19:26:19.216719 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:19.659109 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:19.659819 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:19.660099 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:19.660596 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:19.661195 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:19.661836 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:19.662507 | orchestrator | 2025-05-19 19:26:19.662911 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-19 19:26:19.663565 | orchestrator | Monday 19 May 2025 19:26:19 +0000 (0:00:00.838) 
0:07:20.673 ************ 2025-05-19 19:26:21.238935 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:21.241094 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:21.242509 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:21.243500 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:21.244157 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:21.244646 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:21.245247 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:21.246944 | orchestrator | 2025-05-19 19:26:21.247443 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-19 19:26:21.248097 | orchestrator | Monday 19 May 2025 19:26:21 +0000 (0:00:01.580) 0:07:22.254 ************ 2025-05-19 19:26:21.365659 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:22.590131 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:22.590394 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:22.590783 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:22.594207 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:22.594954 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:22.595624 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:22.596631 | orchestrator | 2025-05-19 19:26:22.597394 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-05-19 19:26:22.598189 | orchestrator | Monday 19 May 2025 19:26:22 +0000 (0:00:01.349) 0:07:23.603 ************ 2025-05-19 19:26:23.913512 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:23.914104 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:23.915332 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:23.915757 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:23.916441 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:23.916911 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:23.917944 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:23.918563 | orchestrator | 2025-05-19 19:26:23.918800 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-05-19 19:26:23.919998 | orchestrator | Monday 19 May 2025 19:26:23 +0000 (0:00:01.322) 0:07:24.926 ************ 2025-05-19 19:26:25.337146 | orchestrator | changed: [testbed-manager] 2025-05-19 19:26:25.337381 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:25.337853 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:25.338429 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:25.339667 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:25.340037 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:25.340455 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:25.341734 | orchestrator | 2025-05-19 19:26:25.341780 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-05-19 19:26:25.341827 | orchestrator | Monday 19 May 2025 19:26:25 +0000 (0:00:01.426) 0:07:26.352 ************ 2025-05-19 19:26:26.358132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:26:26.358433 | orchestrator | 2025-05-19 19:26:26.359332 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-05-19 
19:26:26.359834 | orchestrator | Monday 19 May 2025 19:26:26 +0000 (0:00:01.017) 0:07:27.370 ************ 2025-05-19 19:26:27.816203 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:27.816398 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:27.816823 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:27.817788 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:27.820579 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:27.820633 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:27.821486 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:27.821889 | orchestrator | 2025-05-19 19:26:27.822864 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-05-19 19:26:27.824361 | orchestrator | Monday 19 May 2025 19:26:27 +0000 (0:00:01.460) 0:07:28.830 ************ 2025-05-19 19:26:28.962487 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:28.962937 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:28.963580 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:28.965024 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:28.965589 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:28.966580 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:28.966948 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:28.967117 | orchestrator | 2025-05-19 19:26:28.967486 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-05-19 19:26:28.967811 | orchestrator | Monday 19 May 2025 19:26:28 +0000 (0:00:01.144) 0:07:29.974 ************ 2025-05-19 19:26:30.153608 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:30.153767 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:30.153867 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:30.154447 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:30.157794 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:30.158342 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:30.158873 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:30.159646 | orchestrator | 2025-05-19 19:26:30.159994 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-05-19 19:26:30.160670 | orchestrator | Monday 19 May 2025 19:26:30 +0000 (0:00:01.194) 0:07:31.169 ************ 2025-05-19 19:26:31.528224 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:31.529782 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:31.529984 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:31.531171 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:31.531885 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:31.533346 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:31.534086 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:31.534808 | orchestrator | 2025-05-19 19:26:31.535506 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-05-19 19:26:31.536165 | orchestrator | Monday 19 May 2025 19:26:31 +0000 (0:00:01.369) 0:07:32.538 ************ 2025-05-19 19:26:32.665580 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:26:32.665694 | orchestrator | 2025-05-19 19:26:32.665714 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.665859 | orchestrator 
| Monday 19 May 2025 19:26:32 +0000 (0:00:00.863) 0:07:33.402 ************ 2025-05-19 19:26:32.666502 | orchestrator | 2025-05-19 19:26:32.670551 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.670601 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.037) 0:07:33.439 ************ 2025-05-19 19:26:32.670812 | orchestrator | 2025-05-19 19:26:32.671067 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.671405 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.043) 0:07:33.483 ************ 2025-05-19 19:26:32.672163 | orchestrator | 2025-05-19 19:26:32.674707 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.676549 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.036) 0:07:33.519 ************ 2025-05-19 19:26:32.677078 | orchestrator | 2025-05-19 19:26:32.677488 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.677883 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.036) 0:07:33.556 ************ 2025-05-19 19:26:32.678369 | orchestrator | 2025-05-19 19:26:32.678724 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.679653 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.042) 0:07:33.599 ************ 2025-05-19 19:26:32.680027 | orchestrator | 2025-05-19 19:26:32.680466 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-19 19:26:32.680851 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.038) 0:07:33.637 ************ 2025-05-19 19:26:32.683541 | orchestrator | 2025-05-19 19:26:32.683862 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 19:26:32.684443 | orchestrator | Monday 19 May 2025 19:26:32 +0000 (0:00:00.040) 0:07:33.677 ************ 2025-05-19 19:26:33.780600 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:33.780781 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:33.781445 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:33.783468 | orchestrator | 2025-05-19 19:26:33.783597 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-19 19:26:33.784404 | orchestrator | Monday 19 May 2025 19:26:33 +0000 (0:00:01.116) 0:07:34.794 ************ 2025-05-19 19:26:35.389316 | orchestrator | changed: [testbed-manager] 2025-05-19 19:26:35.389551 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:35.389945 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:35.391486 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:35.392111 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:35.392719 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:35.393422 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:35.393918 | orchestrator | 2025-05-19 19:26:35.394611 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-19 19:26:35.395247 | orchestrator | Monday 19 May 2025 19:26:35 +0000 (0:00:01.605) 0:07:36.399 ************ 2025-05-19 19:26:36.530599 | orchestrator | changed: [testbed-manager] 2025-05-19 19:26:36.530725 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:36.530804 | orchestrator | changed: [testbed-node-3] 
2025-05-19 19:26:36.531562 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:36.531786 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:36.533239 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:36.533647 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:36.534176 | orchestrator | 2025-05-19 19:26:36.535002 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-19 19:26:36.535680 | orchestrator | Monday 19 May 2025 19:26:36 +0000 (0:00:01.143) 0:07:37.543 ************ 2025-05-19 19:26:36.664839 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:38.547414 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:38.547631 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:38.548647 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:38.549753 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:38.550719 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:38.551635 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:38.552350 | orchestrator | 2025-05-19 19:26:38.552748 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-19 19:26:38.553344 | orchestrator | Monday 19 May 2025 19:26:38 +0000 (0:00:02.015) 0:07:39.559 ************ 2025-05-19 19:26:38.648692 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:38.648840 | orchestrator | 2025-05-19 19:26:38.649374 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-19 19:26:38.649955 | orchestrator | Monday 19 May 2025 19:26:38 +0000 (0:00:00.102) 0:07:39.661 ************ 2025-05-19 19:26:39.624776 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:39.624943 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:39.625575 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:39.626587 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:39.626993 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:39.627151 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:39.628000 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:39.628653 | orchestrator | 2025-05-19 19:26:39.629565 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-19 19:26:39.630667 | orchestrator | Monday 19 May 2025 19:26:39 +0000 (0:00:00.977) 0:07:40.638 ************ 2025-05-19 19:26:39.765056 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:39.827348 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:39.893882 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:39.966849 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:40.028618 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:40.333076 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:40.333180 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:40.333195 | orchestrator | 2025-05-19 19:26:40.333209 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-05-19 19:26:40.333842 | orchestrator | Monday 19 May 2025 19:26:40 +0000 (0:00:00.707) 0:07:41.346 ************ 2025-05-19 19:26:41.232667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 
19:26:41.232899 | orchestrator | 2025-05-19 19:26:41.233047 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-19 19:26:41.234209 | orchestrator | Monday 19 May 2025 19:26:41 +0000 (0:00:00.899) 0:07:42.245 ************ 2025-05-19 19:26:42.069817 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:42.069952 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:42.071986 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:42.077665 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:42.077706 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:42.077719 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:42.079216 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:42.079252 | orchestrator | 2025-05-19 19:26:42.079266 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-19 19:26:42.079679 | orchestrator | Monday 19 May 2025 19:26:42 +0000 (0:00:00.829) 0:07:43.075 ************ 2025-05-19 19:26:44.861856 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-19 19:26:44.862598 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-19 19:26:44.863846 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-19 19:26:44.864219 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-19 19:26:44.865396 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-19 19:26:44.866106 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-19 19:26:44.866572 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-19 19:26:44.867593 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-19 19:26:44.868260 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-19 19:26:44.869034 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-19 19:26:44.871617 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-19 19:26:44.871722 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-19 19:26:44.871743 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-19 19:26:44.871762 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-19 19:26:44.871854 | orchestrator | 2025-05-19 19:26:44.872709 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-19 19:26:44.873180 | orchestrator | Monday 19 May 2025 19:26:44 +0000 (0:00:02.797) 0:07:45.872 ************ 2025-05-19 19:26:45.010448 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:45.077165 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:45.146175 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:45.210173 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:45.274394 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:45.378870 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:45.379095 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:45.380204 | orchestrator | 2025-05-19 19:26:45.381541 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-05-19 19:26:45.382190 | orchestrator | Monday 19 May 2025 19:26:45 +0000 (0:00:00.522) 0:07:46.395 ************ 2025-05-19 19:26:46.182763 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:26:46.183897 | orchestrator | 2025-05-19 19:26:46.186943 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-05-19 19:26:46.186977 | orchestrator | Monday 19 May 2025 19:26:46 +0000 (0:00:00.801) 0:07:47.196 ************ 2025-05-19 19:26:46.996020 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:46.996714 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:46.997581 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:46.998142 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:47.000530 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:47.001320 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:47.002085 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:47.002612 | orchestrator | 2025-05-19 19:26:47.003738 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-19 19:26:47.004116 | orchestrator | Monday 19 May 2025 19:26:46 +0000 (0:00:00.812) 0:07:48.009 ************ 2025-05-19 19:26:47.485000 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:47.551980 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:47.626847 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:48.015757 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:48.015861 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:48.015937 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:48.017205 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:48.017228 | orchestrator | 2025-05-19 19:26:48.017543 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-19 19:26:48.019431 | orchestrator | Monday 19 May 2025 19:26:48 +0000 (0:00:01.019) 0:07:49.028 ************ 2025-05-19 19:26:48.151014 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:48.214826 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:48.277962 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:48.360835 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:48.423887 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:48.506504 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:48.506635 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:48.506705 | orchestrator | 2025-05-19 19:26:48.506857 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-19 19:26:48.507161 | orchestrator | Monday 19 May 2025 19:26:48 +0000 (0:00:00.493) 0:07:49.522 ************ 2025-05-19 19:26:49.868115 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:49.868227 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:49.868947 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:49.870140 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:49.871027 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:49.871283 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:49.873535 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:49.874839 | orchestrator | 2025-05-19 19:26:49.875834 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-19 19:26:49.877209 | orchestrator | Monday 19 May 2025 19:26:49 +0000 (0:00:01.360) 0:07:50.882 ************ 2025-05-19 
19:26:50.000916 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:50.070287 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:50.136389 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:50.200785 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:50.266233 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:50.367629 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:50.368687 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:50.369198 | orchestrator | 2025-05-19 19:26:50.370383 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-19 19:26:50.371127 | orchestrator | Monday 19 May 2025 19:26:50 +0000 (0:00:00.496) 0:07:51.379 ************ 2025-05-19 19:26:52.128782 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:52.129421 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:52.129966 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:52.131835 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:52.132664 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:52.133047 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:52.133645 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:52.134770 | orchestrator | 2025-05-19 19:26:52.134971 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-05-19 19:26:52.135584 | orchestrator | Monday 19 May 2025 19:26:52 +0000 (0:00:01.763) 0:07:53.142 ************ 2025-05-19 19:26:53.623898 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:53.625004 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:53.625973 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:53.630297 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:53.630351 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:53.630393 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:53.631184 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:53.632057 | orchestrator | 2025-05-19 19:26:53.633708 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-19 19:26:53.634725 | orchestrator | Monday 19 May 2025 19:26:53 +0000 (0:00:01.496) 0:07:54.638 ************ 2025-05-19 19:26:55.509705 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:55.510324 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:55.511451 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:55.512662 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:55.514255 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:55.515152 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:55.515890 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:26:55.516365 | orchestrator | 2025-05-19 19:26:55.516962 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-19 19:26:55.517814 | orchestrator | Monday 19 May 2025 19:26:55 +0000 (0:00:01.883) 0:07:56.522 ************ 2025-05-19 19:26:57.184976 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:57.190560 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:26:57.190637 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:26:57.191343 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:26:57.192405 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:26:57.193147 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:26:57.193600 | orchestrator | changed: [testbed-node-2] 
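The docker-compose handling just logged (the legacy docker-compose binary and package are checked for and, if present, removed in favour of the docker-compose-plugin package, and an osism.target systemd unit is installed and enabled) could be approximated with tasks like the following. Again only a hedged sketch: the unit path and contents are assumptions, not the real osism.commons.docker_compose role.

# Sketch only -- assumed paths and unit contents
- name: Install docker-compose-plugin package
  ansible.builtin.apt:
    name: docker-compose-plugin
    state: present

- name: Copy osism.target systemd file
  ansible.builtin.copy:
    dest: /etc/systemd/system/osism.target    # assumed path and minimal unit contents
    mode: "0644"
    content: |
      [Unit]
      Description=OSISM services

      [Install]
      WantedBy=multi-user.target

- name: Enable osism.target
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
    daemon_reload: true

With the plugin variant, compose projects are started via "docker compose up -d" (a Docker CLI plugin) rather than a standalone docker-compose binary.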
2025-05-19 19:26:57.194595 | orchestrator | 2025-05-19 19:26:57.195024 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 19:26:57.195522 | orchestrator | Monday 19 May 2025 19:26:57 +0000 (0:00:01.674) 0:07:58.197 ************ 2025-05-19 19:26:58.217956 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:58.218437 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:58.219246 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:58.219304 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:58.219383 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:26:58.221243 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:26:58.221932 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:26:58.222633 | orchestrator | 2025-05-19 19:26:58.223404 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 19:26:58.224194 | orchestrator | Monday 19 May 2025 19:26:58 +0000 (0:00:01.034) 0:07:59.231 ************ 2025-05-19 19:26:58.355819 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:58.423998 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:58.487500 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:58.556818 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:58.618851 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:59.028330 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:59.029156 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:59.036852 | orchestrator | 2025-05-19 19:26:59.037013 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-19 19:26:59.037042 | orchestrator | Monday 19 May 2025 19:26:59 +0000 (0:00:00.810) 0:08:00.042 ************ 2025-05-19 19:26:59.173743 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:26:59.245209 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:26:59.308403 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:26:59.369593 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:26:59.439696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:26:59.540396 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:26:59.541777 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:26:59.542684 | orchestrator | 2025-05-19 19:26:59.543061 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-19 19:26:59.544622 | orchestrator | Monday 19 May 2025 19:26:59 +0000 (0:00:00.512) 0:08:00.554 ************ 2025-05-19 19:26:59.674799 | orchestrator | ok: [testbed-manager] 2025-05-19 19:26:59.750867 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:26:59.829670 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:26:59.899074 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:26:59.961509 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:00.067493 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:00.068347 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:00.069713 | orchestrator | 2025-05-19 19:27:00.072930 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-19 19:27:00.073690 | orchestrator | Monday 19 May 2025 19:27:00 +0000 (0:00:00.527) 0:08:01.082 ************ 2025-05-19 19:27:00.201916 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:00.264877 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:00.506328 | orchestrator | ok: [testbed-node-4] 2025-05-19 
19:27:00.570443 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:00.637405 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:00.753186 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:00.754211 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:00.755699 | orchestrator | 2025-05-19 19:27:00.756520 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-19 19:27:00.758625 | orchestrator | Monday 19 May 2025 19:27:00 +0000 (0:00:00.686) 0:08:01.768 ************ 2025-05-19 19:27:00.881840 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:00.952957 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:01.016752 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:01.081335 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:01.151284 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:01.254527 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:01.255366 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:01.255981 | orchestrator | 2025-05-19 19:27:01.257033 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-19 19:27:01.258132 | orchestrator | Monday 19 May 2025 19:27:01 +0000 (0:00:00.501) 0:08:02.269 ************ 2025-05-19 19:27:07.002920 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:07.004074 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:07.004887 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:07.005379 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:07.006490 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:07.007965 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:07.008047 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:07.008709 | orchestrator | 2025-05-19 19:27:07.009354 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-19 19:27:07.010300 | orchestrator | Monday 19 May 2025 19:27:06 +0000 (0:00:05.747) 0:08:08.017 ************ 2025-05-19 19:27:07.151774 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:27:07.216590 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:27:07.276080 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:27:07.345876 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:27:07.407335 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:27:07.516552 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:27:07.517607 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:27:07.517795 | orchestrator | 2025-05-19 19:27:07.518573 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-19 19:27:07.521369 | orchestrator | Monday 19 May 2025 19:27:07 +0000 (0:00:00.513) 0:08:08.531 ************ 2025-05-19 19:27:08.463518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:27:08.463711 | orchestrator | 2025-05-19 19:27:08.464402 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-19 19:27:08.465315 | orchestrator | Monday 19 May 2025 19:27:08 +0000 (0:00:00.946) 0:08:09.477 ************ 2025-05-19 19:27:10.202164 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:10.202240 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:10.202805 | orchestrator | ok: 
[testbed-node-4] 2025-05-19 19:27:10.203042 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:10.203732 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:10.204461 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:10.205118 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:10.205578 | orchestrator | 2025-05-19 19:27:10.205936 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-19 19:27:10.206503 | orchestrator | Monday 19 May 2025 19:27:10 +0000 (0:00:01.735) 0:08:11.213 ************ 2025-05-19 19:27:11.323001 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:11.323205 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:11.324378 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:11.325336 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:11.325752 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:11.327174 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:11.327487 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:11.328266 | orchestrator | 2025-05-19 19:27:11.328760 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-19 19:27:11.329647 | orchestrator | Monday 19 May 2025 19:27:11 +0000 (0:00:01.124) 0:08:12.337 ************ 2025-05-19 19:27:11.738188 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:12.176363 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:12.176496 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:12.176525 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:12.176539 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:12.176551 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:12.176635 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:12.176651 | orchestrator | 2025-05-19 19:27:12.177360 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-19 19:27:12.177406 | orchestrator | Monday 19 May 2025 19:27:12 +0000 (0:00:00.850) 0:08:13.188 ************ 2025-05-19 19:27:14.162796 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.165407 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.167975 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.168025 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.168200 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.168974 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.169410 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 19:27:14.169912 | orchestrator | 2025-05-19 19:27:14.170701 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-19 19:27:14.171330 | orchestrator | 
Monday 19 May 2025 19:27:14 +0000 (0:00:01.988) 0:08:15.176 ************ 2025-05-19 19:27:14.967214 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:27:14.967439 | orchestrator | 2025-05-19 19:27:14.968044 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-19 19:27:14.968514 | orchestrator | Monday 19 May 2025 19:27:14 +0000 (0:00:00.806) 0:08:15.982 ************ 2025-05-19 19:27:23.883557 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:23.885719 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:23.887185 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:23.888197 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:23.888870 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:23.890328 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:23.890366 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:23.891034 | orchestrator | 2025-05-19 19:27:23.891773 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-19 19:27:23.891948 | orchestrator | Monday 19 May 2025 19:27:23 +0000 (0:00:08.913) 0:08:24.896 ************ 2025-05-19 19:27:25.885006 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:25.885115 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:25.885189 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:25.885206 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:25.886155 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:25.889031 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:25.889490 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:25.889682 | orchestrator | 2025-05-19 19:27:25.890103 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-19 19:27:25.890327 | orchestrator | Monday 19 May 2025 19:27:25 +0000 (0:00:02.002) 0:08:26.898 ************ 2025-05-19 19:27:27.138351 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:27.138876 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:27.139922 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:27.141964 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:27.143725 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:27.144723 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:27.145115 | orchestrator | 2025-05-19 19:27:27.146368 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-19 19:27:27.148583 | orchestrator | Monday 19 May 2025 19:27:27 +0000 (0:00:01.253) 0:08:28.151 ************ 2025-05-19 19:27:28.354317 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:28.355685 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:28.356675 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:28.357933 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:28.357965 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:28.358171 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:28.359095 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:28.359556 | orchestrator | 2025-05-19 19:27:28.359967 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-19 19:27:28.360293 | orchestrator | 2025-05-19 
19:27:28.362496 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-19 19:27:28.363309 | orchestrator | Monday 19 May 2025 19:27:28 +0000 (0:00:01.217) 0:08:29.369 ************ 2025-05-19 19:27:28.663817 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:27:28.725109 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:27:28.801420 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:27:28.860419 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:27:28.919007 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:27:29.036012 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:27:29.036751 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:27:29.036883 | orchestrator | 2025-05-19 19:27:29.037976 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-19 19:27:29.038302 | orchestrator | 2025-05-19 19:27:29.041509 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-19 19:27:29.041540 | orchestrator | Monday 19 May 2025 19:27:29 +0000 (0:00:00.681) 0:08:30.051 ************ 2025-05-19 19:27:30.392081 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:30.394373 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:30.394621 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:30.395096 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:30.404091 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:30.406126 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:30.406391 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:30.407121 | orchestrator | 2025-05-19 19:27:30.407556 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-19 19:27:30.408359 | orchestrator | Monday 19 May 2025 19:27:30 +0000 (0:00:01.353) 0:08:31.404 ************ 2025-05-19 19:27:31.797872 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:31.798000 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:31.798927 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:31.802533 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:31.802612 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:31.802627 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:31.802638 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:31.802651 | orchestrator | 2025-05-19 19:27:31.802715 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-19 19:27:31.803045 | orchestrator | Monday 19 May 2025 19:27:31 +0000 (0:00:01.407) 0:08:32.811 ************ 2025-05-19 19:27:31.927085 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:27:31.987756 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:27:32.045703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:27:32.263032 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:27:32.323683 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:27:32.757415 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:27:32.757564 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:27:32.758164 | orchestrator | 2025-05-19 19:27:32.758721 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-19 19:27:32.763942 | orchestrator | Monday 19 May 2025 19:27:32 +0000 (0:00:00.959) 0:08:33.771 ************ 2025-05-19 19:27:33.981924 | orchestrator | changed: 
[testbed-manager] 2025-05-19 19:27:33.982094 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:33.983455 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:33.983479 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:33.985629 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:33.985650 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:33.985662 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:33.985673 | orchestrator | 2025-05-19 19:27:33.985932 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-19 19:27:33.987080 | orchestrator | 2025-05-19 19:27:33.987678 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-19 19:27:33.988433 | orchestrator | Monday 19 May 2025 19:27:33 +0000 (0:00:01.224) 0:08:34.996 ************ 2025-05-19 19:27:34.775643 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:27:34.775930 | orchestrator | 2025-05-19 19:27:34.777375 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 19:27:34.777712 | orchestrator | Monday 19 May 2025 19:27:34 +0000 (0:00:00.792) 0:08:35.788 ************ 2025-05-19 19:27:35.228081 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:35.296111 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:35.367882 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:35.782779 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:35.783010 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:35.785047 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:35.785955 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:35.786385 | orchestrator | 2025-05-19 19:27:35.787009 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-19 19:27:35.787759 | orchestrator | Monday 19 May 2025 19:27:35 +0000 (0:00:01.006) 0:08:36.795 ************ 2025-05-19 19:27:36.862620 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:36.862810 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:36.866867 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:36.866968 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:36.866984 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:36.867485 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:36.867510 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:36.868055 | orchestrator | 2025-05-19 19:27:36.868723 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-19 19:27:36.869398 | orchestrator | Monday 19 May 2025 19:27:36 +0000 (0:00:01.079) 0:08:37.874 ************ 2025-05-19 19:27:37.652007 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:27:37.652112 | orchestrator | 2025-05-19 19:27:37.652184 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 19:27:37.652731 | orchestrator | Monday 19 May 2025 19:27:37 +0000 (0:00:00.785) 0:08:38.659 ************ 2025-05-19 19:27:38.110445 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:38.704854 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:38.705207 | orchestrator | ok: 
[testbed-node-4] 2025-05-19 19:27:38.706136 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:38.706849 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:38.707089 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:38.708120 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:38.708719 | orchestrator | 2025-05-19 19:27:38.709202 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-19 19:27:38.709721 | orchestrator | Monday 19 May 2025 19:27:38 +0000 (0:00:01.059) 0:08:39.719 ************ 2025-05-19 19:27:39.126011 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:39.798349 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:39.799376 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:39.800347 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:39.801651 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:39.801748 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:39.802804 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:39.803726 | orchestrator | 2025-05-19 19:27:39.804412 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:27:39.804899 | orchestrator | 2025-05-19 19:27:39 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:27:39.805110 | orchestrator | 2025-05-19 19:27:39 | INFO  | Please wait and do not abort execution. 2025-05-19 19:27:39.806091 | orchestrator | testbed-manager : ok=160  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-19 19:27:39.806503 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 19:27:39.807347 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 19:27:39.807577 | orchestrator | testbed-node-2 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 19:27:39.808520 | orchestrator | testbed-node-3 : ok=167  changed=62  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-19 19:27:39.808972 | orchestrator | testbed-node-4 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 19:27:39.809314 | orchestrator | testbed-node-5 : ok=167  changed=62  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 19:27:39.809986 | orchestrator | 2025-05-19 19:27:39.810683 | orchestrator | Monday 19 May 2025 19:27:39 +0000 (0:00:01.093) 0:08:40.812 ************ 2025-05-19 19:27:39.811112 | orchestrator | =============================================================================== 2025-05-19 19:27:39.811632 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.39s 2025-05-19 19:27:39.812091 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 39.44s 2025-05-19 19:27:39.812728 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.84s 2025-05-19 19:27:39.813104 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.85s 2025-05-19 19:27:39.813791 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.67s 2025-05-19 19:27:39.813878 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.88s 2025-05-19 19:27:39.814513 | orchestrator | osism.services.docker : Install 
docker-cli package --------------------- 11.78s 2025-05-19 19:27:39.814636 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.50s 2025-05-19 19:27:39.815183 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.43s 2025-05-19 19:27:39.816527 | orchestrator | osism.commons.packages : Download upgrade packages --------------------- 10.57s 2025-05-19 19:27:39.816841 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.54s 2025-05-19 19:27:39.817916 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.91s 2025-05-19 19:27:39.818831 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.08s 2025-05-19 19:27:39.819923 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.68s 2025-05-19 19:27:39.820990 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.52s 2025-05-19 19:27:39.822285 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.06s 2025-05-19 19:27:39.822963 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.05s 2025-05-19 19:27:39.823575 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.23s 2025-05-19 19:27:39.824120 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.01s 2025-05-19 19:27:39.824875 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.00s 2025-05-19 19:27:40.465901 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-19 19:27:40.466005 | orchestrator | + osism apply network 2025-05-19 19:27:42.444456 | orchestrator | 2025-05-19 19:27:42 | INFO  | Task d6cec556-fcaf-475b-a164-811cafe78409 (network) was prepared for execution. 2025-05-19 19:27:42.444631 | orchestrator | 2025-05-19 19:27:42 | INFO  | It takes a moment until task d6cec556-fcaf-475b-a164-811cafe78409 (network) has been started and output is visible here. 
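For orientation, the remaining steps of this job are driven through the osism CLI wrapper around the Ansible playbooks. The sketch below simply collects the invocations that appear later in this log; it is not the deploy script itself, only the observed command pattern (-l limits the target hosts, -e passes extra variables, --environment selects the playbook environment).

  # osism CLI invocations as they appear further down in this log (not the full deploy script)
  osism apply network                                          # netplan + networkd-dispatcher on all nodes
  osism apply wireguard                                        # wg0 on the manager
  osism apply --environment custom workarounds                 # testbed-specific workarounds from the custom environment
  osism apply reboot -l testbed-nodes -e ireallymeanit=yes     # reboot only the testbed-nodes group, confirmation via extra var
  osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
  osism apply hddtemp
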
2025-05-19 19:27:45.737901 | orchestrator | 2025-05-19 19:27:45.738104 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-19 19:27:45.741748 | orchestrator | 2025-05-19 19:27:45.741789 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-19 19:27:45.741798 | orchestrator | Monday 19 May 2025 19:27:45 +0000 (0:00:00.204) 0:00:00.204 ************ 2025-05-19 19:27:45.889656 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:45.976330 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:46.053987 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:46.129393 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:46.205801 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:46.429968 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:46.430310 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:46.430728 | orchestrator | 2025-05-19 19:27:46.431912 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-19 19:27:46.432624 | orchestrator | Monday 19 May 2025 19:27:46 +0000 (0:00:00.692) 0:00:00.896 ************ 2025-05-19 19:27:47.577061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:27:47.581531 | orchestrator | 2025-05-19 19:27:47.581576 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-19 19:27:47.582764 | orchestrator | Monday 19 May 2025 19:27:47 +0000 (0:00:01.145) 0:00:02.042 ************ 2025-05-19 19:27:49.479667 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:49.479973 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:49.483218 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:49.484288 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:49.485091 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:49.485486 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:49.486283 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:49.486664 | orchestrator | 2025-05-19 19:27:49.487950 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-19 19:27:49.488270 | orchestrator | Monday 19 May 2025 19:27:49 +0000 (0:00:01.904) 0:00:03.946 ************ 2025-05-19 19:27:51.133980 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:51.134368 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:51.137053 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:51.137155 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:51.137889 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:51.138403 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:51.138926 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:51.139484 | orchestrator | 2025-05-19 19:27:51.139851 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-19 19:27:51.140494 | orchestrator | Monday 19 May 2025 19:27:51 +0000 (0:00:01.651) 0:00:05.598 ************ 2025-05-19 19:27:51.644188 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-19 19:27:52.328299 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-19 19:27:52.329626 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-19 19:27:52.330798 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-19 19:27:52.334821 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-19 19:27:52.334852 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-19 19:27:52.334864 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-19 19:27:52.334876 | orchestrator | 2025-05-19 19:27:52.334890 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-19 19:27:52.335911 | orchestrator | Monday 19 May 2025 19:27:52 +0000 (0:00:01.195) 0:00:06.793 ************ 2025-05-19 19:27:54.087454 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:27:54.088188 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:27:54.088559 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 19:27:54.089498 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 19:27:54.090211 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 19:27:54.097805 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 19:27:54.097845 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 19:27:54.098622 | orchestrator | 2025-05-19 19:27:54.099062 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-19 19:27:54.099680 | orchestrator | Monday 19 May 2025 19:27:54 +0000 (0:00:01.760) 0:00:08.553 ************ 2025-05-19 19:27:55.812041 | orchestrator | changed: [testbed-manager] 2025-05-19 19:27:55.812265 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:27:55.814392 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:27:55.815431 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:27:55.817215 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:27:55.818680 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:27:55.819528 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:27:55.821504 | orchestrator | 2025-05-19 19:27:55.822255 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-19 19:27:55.823123 | orchestrator | Monday 19 May 2025 19:27:55 +0000 (0:00:01.719) 0:00:10.273 ************ 2025-05-19 19:27:56.332759 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:27:56.414257 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:27:56.853407 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 19:27:56.855358 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 19:27:56.858877 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 19:27:56.859983 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 19:27:56.861079 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 19:27:56.862103 | orchestrator | 2025-05-19 19:27:56.862678 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-19 19:27:56.863633 | orchestrator | Monday 19 May 2025 19:27:56 +0000 (0:00:01.048) 0:00:11.322 ************ 2025-05-19 19:27:57.299926 | orchestrator | ok: [testbed-manager] 2025-05-19 19:27:57.387330 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:27:57.975088 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:27:57.975298 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:27:57.978903 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:27:57.978930 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:27:57.978941 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:27:57.978953 | orchestrator | 2025-05-19 
19:27:57.979728 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-19 19:27:57.980899 | orchestrator | Monday 19 May 2025 19:27:57 +0000 (0:00:01.116) 0:00:12.439 ************ 2025-05-19 19:27:58.150440 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:27:58.228727 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:27:58.310573 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:27:58.401189 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:27:58.491892 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:27:58.779490 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:27:58.779667 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:27:58.781466 | orchestrator | 2025-05-19 19:27:58.784656 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-19 19:27:58.784682 | orchestrator | Monday 19 May 2025 19:27:58 +0000 (0:00:00.805) 0:00:13.244 ************ 2025-05-19 19:28:00.868068 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:00.868166 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:00.868415 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:00.870095 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:00.870996 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:00.871811 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:00.872774 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:00.873420 | orchestrator | 2025-05-19 19:28:00.874181 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-19 19:28:00.874776 | orchestrator | Monday 19 May 2025 19:28:00 +0000 (0:00:02.083) 0:00:15.328 ************ 2025-05-19 19:28:02.595436 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-19 19:28:02.596288 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.597041 | orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.598176 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.599357 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.600254 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.600810 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.601991 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-19 19:28:02.602747 | orchestrator | 2025-05-19 19:28:02.603762 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-19 19:28:02.604088 | orchestrator | Monday 19 May 2025 19:28:02 +0000 (0:00:01.730) 0:00:17.058 ************ 2025-05-19 19:28:04.081310 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:04.082519 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:28:04.083529 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:28:04.084965 | 
orchestrator | changed: [testbed-node-1] 2025-05-19 19:28:04.086523 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:28:04.089884 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:28:04.090190 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:28:04.091105 | orchestrator | 2025-05-19 19:28:04.092794 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-19 19:28:04.093703 | orchestrator | Monday 19 May 2025 19:28:04 +0000 (0:00:01.488) 0:00:18.546 ************ 2025-05-19 19:28:05.594694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:28:05.594862 | orchestrator | 2025-05-19 19:28:05.595113 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-19 19:28:05.595650 | orchestrator | Monday 19 May 2025 19:28:05 +0000 (0:00:01.512) 0:00:20.059 ************ 2025-05-19 19:28:06.558568 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:06.558765 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:06.560551 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:06.561431 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:06.564509 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:06.564532 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:06.564544 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:06.564846 | orchestrator | 2025-05-19 19:28:06.565419 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-19 19:28:06.565735 | orchestrator | Monday 19 May 2025 19:28:06 +0000 (0:00:00.965) 0:00:21.025 ************ 2025-05-19 19:28:06.727680 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:06.804129 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:07.042964 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:07.126890 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:07.209496 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:07.353687 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:07.357916 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:07.357952 | orchestrator | 2025-05-19 19:28:07.357968 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-19 19:28:07.357982 | orchestrator | Monday 19 May 2025 19:28:07 +0000 (0:00:00.792) 0:00:21.817 ************ 2025-05-19 19:28:07.726338 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:07.726569 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:07.811467 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:07.922910 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:07.923019 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:07.923091 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:08.400293 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:08.401679 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:08.403060 | orchestrator | changed: [testbed-node-3] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:08.404232 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:08.404992 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:08.406237 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:08.407227 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 19:28:08.408247 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 19:28:08.408432 | orchestrator | 2025-05-19 19:28:08.408828 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-19 19:28:08.409381 | orchestrator | Monday 19 May 2025 19:28:08 +0000 (0:00:01.048) 0:00:22.865 ************ 2025-05-19 19:28:08.721547 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:28:08.805716 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:28:08.887095 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:28:08.973283 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:28:09.055286 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:28:10.365419 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:28:10.366514 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:28:10.371653 | orchestrator | 2025-05-19 19:28:10.372417 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-19 19:28:10.373256 | orchestrator | Monday 19 May 2025 19:28:10 +0000 (0:00:01.963) 0:00:24.829 ************ 2025-05-19 19:28:10.524709 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:28:10.613952 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:28:10.859898 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:28:10.956416 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:28:11.040908 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:28:11.084272 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:28:11.084449 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:28:11.085049 | orchestrator | 2025-05-19 19:28:11.085485 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:28:11.085694 | orchestrator | 2025-05-19 19:28:11 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:28:11.085725 | orchestrator | 2025-05-19 19:28:11 | INFO  | Please wait and do not abort execution. 
2025-05-19 19:28:11.086651 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.086726 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.087614 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.088011 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.088365 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.089023 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.089702 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:28:11.089987 | orchestrator | 2025-05-19 19:28:11.090824 | orchestrator | Monday 19 May 2025 19:28:11 +0000 (0:00:00.723) 0:00:25.553 ************ 2025-05-19 19:28:11.091241 | orchestrator | =============================================================================== 2025-05-19 19:28:11.092393 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.08s 2025-05-19 19:28:11.093501 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.96s 2025-05-19 19:28:11.093984 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.90s 2025-05-19 19:28:11.094809 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.76s 2025-05-19 19:28:11.095434 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.73s 2025-05-19 19:28:11.096181 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.72s 2025-05-19 19:28:11.096947 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.65s 2025-05-19 19:28:11.097620 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.51s 2025-05-19 19:28:11.098114 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.49s 2025-05-19 19:28:11.098678 | orchestrator | osism.commons.network : Create required directories --------------------- 1.20s 2025-05-19 19:28:11.099391 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2025-05-19 19:28:11.099900 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-05-19 19:28:11.100415 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.05s 2025-05-19 19:28:11.100987 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.05s 2025-05-19 19:28:11.101425 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-05-19 19:28:11.101959 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.81s 2025-05-19 19:28:11.102495 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.79s 2025-05-19 19:28:11.103509 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.72s 2025-05-19 19:28:11.103601 | orchestrator | osism.commons.network : Gather variables for each operating 
system ------ 0.69s 2025-05-19 19:28:11.671645 | orchestrator | + osism apply wireguard 2025-05-19 19:28:13.121536 | orchestrator | 2025-05-19 19:28:13 | INFO  | Task c4703c70-7687-46bb-97c7-5ae1c7d1df28 (wireguard) was prepared for execution. 2025-05-19 19:28:13.121644 | orchestrator | 2025-05-19 19:28:13 | INFO  | It takes a moment until task c4703c70-7687-46bb-97c7-5ae1c7d1df28 (wireguard) has been started and output is visible here. 2025-05-19 19:28:16.227381 | orchestrator | 2025-05-19 19:28:16.228042 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-19 19:28:16.230941 | orchestrator | 2025-05-19 19:28:16.231085 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-19 19:28:16.231605 | orchestrator | Monday 19 May 2025 19:28:16 +0000 (0:00:00.165) 0:00:00.165 ************ 2025-05-19 19:28:17.666915 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:17.667691 | orchestrator | 2025-05-19 19:28:17.668762 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-19 19:28:17.669781 | orchestrator | Monday 19 May 2025 19:28:17 +0000 (0:00:01.441) 0:00:01.607 ************ 2025-05-19 19:28:23.902642 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:23.905031 | orchestrator | 2025-05-19 19:28:23.905099 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-19 19:28:23.905929 | orchestrator | Monday 19 May 2025 19:28:23 +0000 (0:00:06.233) 0:00:07.841 ************ 2025-05-19 19:28:24.426640 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:24.426818 | orchestrator | 2025-05-19 19:28:24.427134 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-19 19:28:24.427728 | orchestrator | Monday 19 May 2025 19:28:24 +0000 (0:00:00.527) 0:00:08.368 ************ 2025-05-19 19:28:24.830282 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:24.830398 | orchestrator | 2025-05-19 19:28:24.830483 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-19 19:28:24.830577 | orchestrator | Monday 19 May 2025 19:28:24 +0000 (0:00:00.403) 0:00:08.772 ************ 2025-05-19 19:28:25.357570 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:25.357730 | orchestrator | 2025-05-19 19:28:25.358231 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-19 19:28:25.360002 | orchestrator | Monday 19 May 2025 19:28:25 +0000 (0:00:00.525) 0:00:09.297 ************ 2025-05-19 19:28:25.872071 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:25.872448 | orchestrator | 2025-05-19 19:28:25.872910 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-19 19:28:25.873619 | orchestrator | Monday 19 May 2025 19:28:25 +0000 (0:00:00.516) 0:00:09.814 ************ 2025-05-19 19:28:26.279505 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:26.280962 | orchestrator | 2025-05-19 19:28:26.281373 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-19 19:28:26.281788 | orchestrator | Monday 19 May 2025 19:28:26 +0000 (0:00:00.404) 0:00:10.218 ************ 2025-05-19 19:28:27.426347 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:27.427759 | orchestrator | 2025-05-19 19:28:27.427810 | orchestrator | TASK 
[osism.services.wireguard : Copy client configuration files] ************** 2025-05-19 19:28:27.427890 | orchestrator | Monday 19 May 2025 19:28:27 +0000 (0:00:01.147) 0:00:11.366 ************ 2025-05-19 19:28:28.266529 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 19:28:28.267281 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:28.267927 | orchestrator | 2025-05-19 19:28:28.268688 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-19 19:28:28.269536 | orchestrator | Monday 19 May 2025 19:28:28 +0000 (0:00:00.838) 0:00:12.205 ************ 2025-05-19 19:28:29.917294 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:29.917417 | orchestrator | 2025-05-19 19:28:29.918980 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-19 19:28:29.919009 | orchestrator | Monday 19 May 2025 19:28:29 +0000 (0:00:01.651) 0:00:13.856 ************ 2025-05-19 19:28:30.859949 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:30.860631 | orchestrator | 2025-05-19 19:28:30.862189 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:28:30.862468 | orchestrator | 2025-05-19 19:28:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:28:30.862753 | orchestrator | 2025-05-19 19:28:30 | INFO  | Please wait and do not abort execution. 2025-05-19 19:28:30.863477 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:28:30.864536 | orchestrator | 2025-05-19 19:28:30.865180 | orchestrator | Monday 19 May 2025 19:28:30 +0000 (0:00:00.942) 0:00:14.799 ************ 2025-05-19 19:28:30.866563 | orchestrator | =============================================================================== 2025-05-19 19:28:30.866662 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.23s 2025-05-19 19:28:30.867052 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-05-19 19:28:30.867495 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s 2025-05-19 19:28:30.867935 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.15s 2025-05-19 19:28:30.868466 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s 2025-05-19 19:28:30.869022 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.84s 2025-05-19 19:28:30.869415 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.53s 2025-05-19 19:28:30.869837 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-05-19 19:28:30.870670 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s 2025-05-19 19:28:30.871340 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-19 19:28:30.871653 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-05-19 19:28:31.371157 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-19 19:28:31.409460 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-19 19:28:31.409533 | orchestrator | 
Dload Upload Total Spent Left Speed 2025-05-19 19:28:31.499504 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 155 0 --:--:-- --:--:-- --:--:-- 155 2025-05-19 19:28:31.511735 | orchestrator | + osism apply --environment custom workarounds 2025-05-19 19:28:32.849632 | orchestrator | 2025-05-19 19:28:32 | INFO  | Trying to run play workarounds in environment custom 2025-05-19 19:28:32.903171 | orchestrator | 2025-05-19 19:28:32 | INFO  | Task eb733de6-83cf-4cbf-891e-efa8d40d836e (workarounds) was prepared for execution. 2025-05-19 19:28:32.904301 | orchestrator | 2025-05-19 19:28:32 | INFO  | It takes a moment until task eb733de6-83cf-4cbf-891e-efa8d40d836e (workarounds) has been started and output is visible here. 2025-05-19 19:28:35.946336 | orchestrator | 2025-05-19 19:28:35.949549 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:28:35.950109 | orchestrator | 2025-05-19 19:28:35.950144 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-19 19:28:35.950157 | orchestrator | Monday 19 May 2025 19:28:35 +0000 (0:00:00.137) 0:00:00.137 ************ 2025-05-19 19:28:36.109234 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-19 19:28:36.191378 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-19 19:28:36.272561 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-19 19:28:36.355537 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-19 19:28:36.435543 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-19 19:28:36.692344 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-19 19:28:36.693039 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-19 19:28:36.693622 | orchestrator | 2025-05-19 19:28:36.694120 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-19 19:28:36.694695 | orchestrator | 2025-05-19 19:28:36.696037 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 19:28:36.696060 | orchestrator | Monday 19 May 2025 19:28:36 +0000 (0:00:00.749) 0:00:00.887 ************ 2025-05-19 19:28:39.211564 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:39.211669 | orchestrator | 2025-05-19 19:28:39.211685 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-19 19:28:39.214312 | orchestrator | 2025-05-19 19:28:39.214347 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 19:28:39.214825 | orchestrator | Monday 19 May 2025 19:28:39 +0000 (0:00:02.512) 0:00:03.399 ************ 2025-05-19 19:28:41.005718 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:41.005820 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:41.006593 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:41.007657 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:41.007958 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:41.008664 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:41.009275 | orchestrator | 2025-05-19 19:28:41.010090 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-19 19:28:41.010648 | orchestrator | 2025-05-19 
19:28:41.010933 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-19 19:28:41.011669 | orchestrator | Monday 19 May 2025 19:28:40 +0000 (0:00:01.798) 0:00:05.198 ************ 2025-05-19 19:28:42.530422 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.530539 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.530651 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.530885 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.531392 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.531707 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 19:28:42.531988 | orchestrator | 2025-05-19 19:28:42.532401 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-19 19:28:42.534236 | orchestrator | Monday 19 May 2025 19:28:42 +0000 (0:00:01.522) 0:00:06.720 ************ 2025-05-19 19:28:46.301723 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:28:46.302091 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:28:46.303314 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:28:46.304311 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:28:46.305624 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:28:46.307180 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:28:46.307739 | orchestrator | 2025-05-19 19:28:46.308283 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-19 19:28:46.309592 | orchestrator | Monday 19 May 2025 19:28:46 +0000 (0:00:03.773) 0:00:10.494 ************ 2025-05-19 19:28:46.442930 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:28:46.520665 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:28:46.617546 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:28:46.825409 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:28:46.962820 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:28:46.963775 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:28:46.965149 | orchestrator | 2025-05-19 19:28:46.965893 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-19 19:28:46.966998 | orchestrator | 2025-05-19 19:28:46.967819 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-19 19:28:46.968551 | orchestrator | Monday 19 May 2025 19:28:46 +0000 (0:00:00.660) 0:00:11.154 ************ 2025-05-19 19:28:48.529879 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:48.534176 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:28:48.535969 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:28:48.536727 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:28:48.537706 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:28:48.538597 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:28:48.539525 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:28:48.540006 | orchestrator | 2025-05-19 19:28:48.540724 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-19 19:28:48.541511 | orchestrator | Monday 19 May 2025 19:28:48 +0000 (0:00:01.567) 0:00:12.722 ************ 2025-05-19 19:28:50.135609 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:50.136139 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:28:50.139577 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:28:50.139617 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:28:50.139629 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:28:50.139691 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:28:50.141277 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:28:50.141809 | orchestrator | 2025-05-19 19:28:50.142432 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-19 19:28:50.142812 | orchestrator | Monday 19 May 2025 19:28:50 +0000 (0:00:01.601) 0:00:14.323 ************ 2025-05-19 19:28:51.575648 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:51.575871 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:51.576849 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:51.578589 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:51.579943 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:51.580473 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:51.581555 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:51.582539 | orchestrator | 2025-05-19 19:28:51.583873 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-19 19:28:51.584420 | orchestrator | Monday 19 May 2025 19:28:51 +0000 (0:00:01.444) 0:00:15.768 ************ 2025-05-19 19:28:53.233324 | orchestrator | changed: [testbed-manager] 2025-05-19 19:28:53.234739 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:28:53.235653 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:28:53.237289 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:28:53.237729 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:28:53.239020 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:28:53.239930 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:28:53.240818 | orchestrator | 2025-05-19 19:28:53.243619 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-19 19:28:53.243824 | orchestrator | Monday 19 May 2025 19:28:53 +0000 (0:00:01.656) 0:00:17.424 ************ 2025-05-19 19:28:53.377814 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:28:53.451715 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:28:53.525500 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:28:53.595034 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:28:53.809785 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:28:53.954769 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:28:53.957001 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:28:53.957326 | orchestrator | 2025-05-19 19:28:53.958367 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-19 19:28:53.958938 | orchestrator | 2025-05-19 19:28:53.959755 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-19 19:28:53.960569 | orchestrator | Monday 19 May 2025 19:28:53 +0000 (0:00:00.721) 0:00:18.146 ************ 2025-05-19 19:28:56.338401 | orchestrator | ok: [testbed-manager] 2025-05-19 19:28:56.338625 
| orchestrator | ok: [testbed-node-5] 2025-05-19 19:28:56.338927 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:28:56.339539 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:28:56.340368 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:28:56.341763 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:28:56.342322 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:28:56.343334 | orchestrator | 2025-05-19 19:28:56.344242 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:28:56.344299 | orchestrator | 2025-05-19 19:28:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:28:56.344316 | orchestrator | 2025-05-19 19:28:56 | INFO  | Please wait and do not abort execution. 2025-05-19 19:28:56.344383 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:28:56.346126 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.347040 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.347910 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.348738 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.349543 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.349567 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:28:56.350173 | orchestrator | 2025-05-19 19:28:56.350618 | orchestrator | Monday 19 May 2025 19:28:56 +0000 (0:00:02.384) 0:00:20.530 ************ 2025-05-19 19:28:56.351142 | orchestrator | =============================================================================== 2025-05-19 19:28:56.351662 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.77s 2025-05-19 19:28:56.352138 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s 2025-05-19 19:28:56.352498 | orchestrator | Install python3-docker -------------------------------------------------- 2.38s 2025-05-19 19:28:56.353033 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-05-19 19:28:56.353749 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.66s 2025-05-19 19:28:56.354176 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s 2025-05-19 19:28:56.354489 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.57s 2025-05-19 19:28:56.354753 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.52s 2025-05-19 19:28:56.355238 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.44s 2025-05-19 19:28:56.355463 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s 2025-05-19 19:28:56.355844 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.72s 2025-05-19 19:28:56.356034 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.66s 2025-05-19 19:28:56.830469 | orchestrator | + osism 
apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-19 19:28:58.235125 | orchestrator | 2025-05-19 19:28:58 | INFO  | Task 3b4ddfbe-5c9d-49fd-b2a8-c30f71bb5344 (reboot) was prepared for execution. 2025-05-19 19:28:58.235240 | orchestrator | 2025-05-19 19:28:58 | INFO  | It takes a moment until task 3b4ddfbe-5c9d-49fd-b2a8-c30f71bb5344 (reboot) has been started and output is visible here. 2025-05-19 19:29:01.219382 | orchestrator | 2025-05-19 19:29:01.219556 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:01.221517 | orchestrator | 2025-05-19 19:29:01.222611 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:01.223422 | orchestrator | Monday 19 May 2025 19:29:01 +0000 (0:00:00.142) 0:00:00.142 ************ 2025-05-19 19:29:01.307950 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:29:01.308800 | orchestrator | 2025-05-19 19:29:01.310105 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 19:29:01.310142 | orchestrator | Monday 19 May 2025 19:29:01 +0000 (0:00:00.091) 0:00:00.233 ************ 2025-05-19 19:29:02.201116 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:29:02.202227 | orchestrator | 2025-05-19 19:29:02.203463 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:02.204164 | orchestrator | Monday 19 May 2025 19:29:02 +0000 (0:00:00.892) 0:00:01.126 ************ 2025-05-19 19:29:02.312471 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:29:02.312550 | orchestrator | 2025-05-19 19:29:02.312810 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:02.313656 | orchestrator | 2025-05-19 19:29:02.313679 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:02.313971 | orchestrator | Monday 19 May 2025 19:29:02 +0000 (0:00:00.111) 0:00:01.238 ************ 2025-05-19 19:29:02.406502 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:29:02.406808 | orchestrator | 2025-05-19 19:29:02.407519 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 19:29:02.408253 | orchestrator | Monday 19 May 2025 19:29:02 +0000 (0:00:00.095) 0:00:01.333 ************ 2025-05-19 19:29:03.035484 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:29:03.036413 | orchestrator | 2025-05-19 19:29:03.036789 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:03.037723 | orchestrator | Monday 19 May 2025 19:29:03 +0000 (0:00:00.626) 0:00:01.959 ************ 2025-05-19 19:29:03.140014 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:29:03.140155 | orchestrator | 2025-05-19 19:29:03.140484 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:03.141273 | orchestrator | 2025-05-19 19:29:03.141954 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:03.142622 | orchestrator | Monday 19 May 2025 19:29:03 +0000 (0:00:00.100) 0:00:02.060 ************ 2025-05-19 19:29:03.239338 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:29:03.239743 | orchestrator | 2025-05-19 19:29:03.240690 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] 
****************** 2025-05-19 19:29:03.241494 | orchestrator | Monday 19 May 2025 19:29:03 +0000 (0:00:00.101) 0:00:02.162 ************ 2025-05-19 19:29:03.970251 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:29:03.970380 | orchestrator | 2025-05-19 19:29:03.970613 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:03.971547 | orchestrator | Monday 19 May 2025 19:29:03 +0000 (0:00:00.733) 0:00:02.895 ************ 2025-05-19 19:29:04.079115 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:29:04.082540 | orchestrator | 2025-05-19 19:29:04.083927 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:04.084649 | orchestrator | 2025-05-19 19:29:04.085474 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:04.086756 | orchestrator | Monday 19 May 2025 19:29:04 +0000 (0:00:00.107) 0:00:03.003 ************ 2025-05-19 19:29:04.179374 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:29:04.179949 | orchestrator | 2025-05-19 19:29:04.180962 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 19:29:04.183825 | orchestrator | Monday 19 May 2025 19:29:04 +0000 (0:00:00.101) 0:00:03.105 ************ 2025-05-19 19:29:04.850712 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:29:04.850832 | orchestrator | 2025-05-19 19:29:04.850995 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:04.851113 | orchestrator | Monday 19 May 2025 19:29:04 +0000 (0:00:00.669) 0:00:03.774 ************ 2025-05-19 19:29:04.946840 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:29:04.946928 | orchestrator | 2025-05-19 19:29:04.946975 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:04.947553 | orchestrator | 2025-05-19 19:29:04.949086 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:04.949301 | orchestrator | Monday 19 May 2025 19:29:04 +0000 (0:00:00.095) 0:00:03.870 ************ 2025-05-19 19:29:05.043092 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:29:05.043249 | orchestrator | 2025-05-19 19:29:05.043269 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 19:29:05.043409 | orchestrator | Monday 19 May 2025 19:29:05 +0000 (0:00:00.098) 0:00:03.968 ************ 2025-05-19 19:29:05.683870 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:29:05.686612 | orchestrator | 2025-05-19 19:29:05.687086 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:05.687441 | orchestrator | Monday 19 May 2025 19:29:05 +0000 (0:00:00.639) 0:00:04.608 ************ 2025-05-19 19:29:05.785636 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:29:05.787579 | orchestrator | 2025-05-19 19:29:05.788843 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 19:29:05.789524 | orchestrator | 2025-05-19 19:29:05.790161 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 19:29:05.790836 | orchestrator | Monday 19 May 2025 19:29:05 +0000 (0:00:00.100) 0:00:04.709 ************ 2025-05-19 19:29:05.878802 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:29:05.878921 | orchestrator | 2025-05-19 19:29:05.878946 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 19:29:05.878961 | orchestrator | Monday 19 May 2025 19:29:05 +0000 (0:00:00.093) 0:00:04.803 ************ 2025-05-19 19:29:06.579638 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:29:06.580071 | orchestrator | 2025-05-19 19:29:06.580863 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 19:29:06.581339 | orchestrator | Monday 19 May 2025 19:29:06 +0000 (0:00:00.701) 0:00:05.504 ************ 2025-05-19 19:29:06.610554 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:29:06.611151 | orchestrator | 2025-05-19 19:29:06.612136 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:29:06.612677 | orchestrator | 2025-05-19 19:29:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:29:06.612737 | orchestrator | 2025-05-19 19:29:06 | INFO  | Please wait and do not abort execution. 2025-05-19 19:29:06.614141 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.615067 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.615353 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.615840 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.616801 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.617135 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:29:06.617552 | orchestrator | 2025-05-19 19:29:06.617900 | orchestrator | Monday 19 May 2025 19:29:06 +0000 (0:00:00.032) 0:00:05.537 ************ 2025-05-19 19:29:06.618333 | orchestrator | =============================================================================== 2025-05-19 19:29:06.618754 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2025-05-19 19:29:06.619099 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s 2025-05-19 19:29:06.619463 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s 2025-05-19 19:29:07.052426 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-19 19:29:08.423486 | orchestrator | 2025-05-19 19:29:08 | INFO  | Task f6a4a2cf-5347-4a77-a819-768d8b01ef07 (wait-for-connection) was prepared for execution. 2025-05-19 19:29:08.423602 | orchestrator | 2025-05-19 19:29:08 | INFO  | It takes a moment until task f6a4a2cf-5347-4a77-a819-768d8b01ef07 (wait-for-connection) has been started and output is visible here. 
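For reference, the reboot step above follows a common Ansible pattern: a guard task that fails unless ireallymeanit=yes was passed on the command line, a fire-and-forget reboot, and a wait task that stays skipped because reachability is verified by the separate wait-for-connection play that follows. A minimal sketch of such a play, assuming the variable names shown in the comments; this is not the exact OSISM playbook:

- name: Reboot systems
  hosts: testbed-nodes
  serial: 1                      # the per-host plays in the log suggest serialized execution
  gather_facts: false
  tasks:
    - name: Exit playbook, if user did not mean to reboot systems
      ansible.builtin.fail:
        msg: "Pass -e ireallymeanit=yes to confirm the reboot"
      when: ireallymeanit | default('no') != 'yes'

    - name: Reboot system - do not wait for the reboot to complete
      ansible.builtin.shell: sleep 2 && shutdown -r now "Reboot triggered by Ansible"
      async: 1
      poll: 0
      become: true

    - name: Reboot system - wait for the reboot to complete
      ansible.builtin.wait_for_connection:
        delay: 10
        timeout: 300
      when: reboot_wait | default(false)   # assumed toggle; this path is skipped in the run above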
2025-05-19 19:29:11.386382 | orchestrator | 2025-05-19 19:29:11.390562 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-19 19:29:11.393064 | orchestrator | 2025-05-19 19:29:11.393929 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-19 19:29:11.394333 | orchestrator | Monday 19 May 2025 19:29:11 +0000 (0:00:00.159) 0:00:00.159 ************ 2025-05-19 19:29:24.521155 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:29:24.521333 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:29:24.521349 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:29:24.521361 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:29:24.522370 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:29:24.522854 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:29:24.524894 | orchestrator | 2025-05-19 19:29:24.525497 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:29:24.525900 | orchestrator | 2025-05-19 19:29:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:29:24.526307 | orchestrator | 2025-05-19 19:29:24 | INFO  | Please wait and do not abort execution. 2025-05-19 19:29:24.526868 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.528282 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.529271 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.529490 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.529821 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.530404 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:24.531399 | orchestrator | 2025-05-19 19:29:24.532092 | orchestrator | Monday 19 May 2025 19:29:24 +0000 (0:00:13.135) 0:00:13.294 ************ 2025-05-19 19:29:24.532953 | orchestrator | =============================================================================== 2025-05-19 19:29:24.533490 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.14s 2025-05-19 19:29:24.952294 | orchestrator | + osism apply hddtemp 2025-05-19 19:29:26.317315 | orchestrator | 2025-05-19 19:29:26 | INFO  | Task 63c2b540-6837-4ca4-8661-6e4b777dd585 (hddtemp) was prepared for execution. 2025-05-19 19:29:26.317434 | orchestrator | 2025-05-19 19:29:26 | INFO  | It takes a moment until task 63c2b540-6837-4ca4-8661-6e4b777dd585 (hddtemp) has been started and output is visible here. 
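The follow-up wait-for-connection play needs nothing more than the wait_for_connection module; a minimal sketch, with assumed timeout values rather than the ones used by this job:

- name: Wait until remote systems are reachable
  hosts: testbed-nodes
  gather_facts: false
  tasks:
    - name: Wait until remote system is reachable
      ansible.builtin.wait_for_connection:
        delay: 5       # assumed initial grace period
        sleep: 5       # assumed poll interval
        timeout: 600   # assumed upper bound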
2025-05-19 19:29:29.337205 | orchestrator | 2025-05-19 19:29:29.338115 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-19 19:29:29.341706 | orchestrator | 2025-05-19 19:29:29.341732 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-19 19:29:29.341744 | orchestrator | Monday 19 May 2025 19:29:29 +0000 (0:00:00.191) 0:00:00.191 ************ 2025-05-19 19:29:29.479868 | orchestrator | ok: [testbed-manager] 2025-05-19 19:29:29.552326 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:29:29.624521 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:29:29.693979 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:29:29.776254 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:29:29.972705 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:29:29.972911 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:29:29.973932 | orchestrator | 2025-05-19 19:29:29.974640 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-19 19:29:29.981133 | orchestrator | Monday 19 May 2025 19:29:29 +0000 (0:00:00.635) 0:00:00.827 ************ 2025-05-19 19:29:31.083686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:29:31.083929 | orchestrator | 2025-05-19 19:29:31.084874 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-19 19:29:31.086340 | orchestrator | Monday 19 May 2025 19:29:31 +0000 (0:00:01.109) 0:00:01.936 ************ 2025-05-19 19:29:32.929690 | orchestrator | ok: [testbed-manager] 2025-05-19 19:29:32.930779 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:29:32.932949 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:29:32.934255 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:29:32.936519 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:29:32.936867 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:29:32.937683 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:29:32.938415 | orchestrator | 2025-05-19 19:29:32.939378 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-19 19:29:32.940146 | orchestrator | Monday 19 May 2025 19:29:32 +0000 (0:00:01.848) 0:00:03.785 ************ 2025-05-19 19:29:33.503280 | orchestrator | changed: [testbed-manager] 2025-05-19 19:29:33.589095 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:29:34.029119 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:29:34.029496 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:29:34.031313 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:29:34.032317 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:29:34.033570 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:29:34.034573 | orchestrator | 2025-05-19 19:29:34.035209 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-19 19:29:34.036010 | orchestrator | Monday 19 May 2025 19:29:34 +0000 (0:00:01.094) 0:00:04.879 ************ 2025-05-19 19:29:35.485015 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:29:35.486416 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:29:35.488125 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:29:35.489550 | orchestrator | ok: [testbed-node-3] 2025-05-19 
19:29:35.490598 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:29:35.491523 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:29:35.492571 | orchestrator | ok: [testbed-manager] 2025-05-19 19:29:35.493527 | orchestrator | 2025-05-19 19:29:35.493864 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-19 19:29:35.494581 | orchestrator | Monday 19 May 2025 19:29:35 +0000 (0:00:01.456) 0:00:06.336 ************ 2025-05-19 19:29:35.732021 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:29:35.814299 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:29:35.898343 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:29:35.970772 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:29:36.083022 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:29:36.084218 | orchestrator | changed: [testbed-manager] 2025-05-19 19:29:36.085227 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:29:36.086305 | orchestrator | 2025-05-19 19:29:36.087449 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-19 19:29:36.088599 | orchestrator | Monday 19 May 2025 19:29:36 +0000 (0:00:00.602) 0:00:06.938 ************ 2025-05-19 19:29:48.728152 | orchestrator | changed: [testbed-manager] 2025-05-19 19:29:48.728305 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:29:48.729416 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:29:48.729728 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:29:48.730567 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:29:48.731616 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:29:48.731779 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:29:48.732495 | orchestrator | 2025-05-19 19:29:48.733030 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-19 19:29:48.734386 | orchestrator | Monday 19 May 2025 19:29:48 +0000 (0:00:12.636) 0:00:19.575 ************ 2025-05-19 19:29:49.898944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:29:49.899070 | orchestrator | 2025-05-19 19:29:49.899631 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-19 19:29:49.900343 | orchestrator | Monday 19 May 2025 19:29:49 +0000 (0:00:01.175) 0:00:20.750 ************ 2025-05-19 19:29:51.697981 | orchestrator | changed: [testbed-manager] 2025-05-19 19:29:51.698520 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:29:51.699460 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:29:51.700882 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:29:51.701608 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:29:51.702316 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:29:51.704086 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:29:51.704363 | orchestrator | 2025-05-19 19:29:51.704928 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:29:51.705551 | orchestrator | 2025-05-19 19:29:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:29:51.705785 | orchestrator | 2025-05-19 19:29:51 | INFO  | Please wait and do not abort execution. 
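The osism.services.hddtemp role run here swaps the obsolete hddtemp package for the in-kernel drivetemp hwmon driver plus lm-sensors. The same sequence can be sketched with stock modules; package, kernel module and service names come from the task output above, everything else is an assumption:

- name: Remove hddtemp package
  ansible.builtin.apt:
    name: hddtemp
    state: absent

- name: Enable Kernel Module drivetemp
  ansible.builtin.copy:
    dest: /etc/modules-load.d/drivetemp.conf
    content: "drivetemp\n"

- name: Load Kernel Module drivetemp
  community.general.modprobe:
    name: drivetemp
    state: present

- name: Install lm-sensors
  ansible.builtin.apt:
    name: lm-sensors
    state: present

- name: Manage lm-sensors service
  ansible.builtin.service:
    name: lm-sensors
    state: started
    enabled: true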
2025-05-19 19:29:51.706906 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:29:51.707772 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.708139 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.708855 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.709236 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.709702 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.710366 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:29:51.710774 | orchestrator | 2025-05-19 19:29:51.711224 | orchestrator | Monday 19 May 2025 19:29:51 +0000 (0:00:01.802) 0:00:22.553 ************ 2025-05-19 19:29:51.711622 | orchestrator | =============================================================================== 2025-05-19 19:29:51.712340 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.64s 2025-05-19 19:29:51.712836 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.85s 2025-05-19 19:29:51.713769 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.80s 2025-05-19 19:29:51.714468 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.46s 2025-05-19 19:29:51.714890 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s 2025-05-19 19:29:51.715923 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.11s 2025-05-19 19:29:51.716517 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.09s 2025-05-19 19:29:51.717497 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.64s 2025-05-19 19:29:51.717974 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.60s 2025-05-19 19:29:52.233312 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-19 19:29:53.972618 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 19:29:53.972732 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-19 19:29:53.972750 | orchestrator | + local max_attempts=60 2025-05-19 19:29:53.972764 | orchestrator | + local name=ceph-ansible 2025-05-19 19:29:53.972776 | orchestrator | + local attempt_num=1 2025-05-19 19:29:53.973066 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-19 19:29:54.010698 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:29:54.010778 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-19 19:29:54.010792 | orchestrator | + local max_attempts=60 2025-05-19 19:29:54.010805 | orchestrator | + local name=kolla-ansible 2025-05-19 19:29:54.010817 | orchestrator | + local attempt_num=1 2025-05-19 19:29:54.011545 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-19 19:29:54.040730 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:29:54.040796 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2025-05-19 19:29:54.040810 | orchestrator | + local max_attempts=60 2025-05-19 19:29:54.040823 | orchestrator | + local name=osism-ansible 2025-05-19 19:29:54.040834 | orchestrator | + local attempt_num=1 2025-05-19 19:29:54.041515 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-19 19:29:54.070733 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 19:29:54.070799 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-19 19:29:54.070813 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-19 19:29:54.225624 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-19 19:29:54.364713 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-19 19:29:54.527869 | orchestrator | ARA in osism-ansible already disabled. 2025-05-19 19:29:54.697795 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-19 19:29:54.698004 | orchestrator | + osism apply gather-facts 2025-05-19 19:29:56.097650 | orchestrator | 2025-05-19 19:29:56 | INFO  | Task ca7699bd-e7db-449b-b208-d4bb26a33649 (gather-facts) was prepared for execution. 2025-05-19 19:29:56.097753 | orchestrator | 2025-05-19 19:29:56 | INFO  | It takes a moment until task ca7699bd-e7db-449b-b208-d4bb26a33649 (gather-facts) has been started and output is visible here. 2025-05-19 19:29:59.083499 | orchestrator | 2025-05-19 19:29:59.083611 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 19:29:59.084098 | orchestrator | 2025-05-19 19:29:59.084903 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:29:59.085734 | orchestrator | Monday 19 May 2025 19:29:59 +0000 (0:00:00.157) 0:00:00.157 ************ 2025-05-19 19:30:04.018766 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:30:04.019608 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:30:04.020929 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:30:04.022336 | orchestrator | ok: [testbed-manager] 2025-05-19 19:30:04.023742 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:30:04.024495 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:30:04.025475 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:30:04.026977 | orchestrator | 2025-05-19 19:30:04.028429 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 19:30:04.028820 | orchestrator | 2025-05-19 19:30:04.029581 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 19:30:04.030305 | orchestrator | Monday 19 May 2025 19:30:04 +0000 (0:00:04.939) 0:00:05.096 ************ 2025-05-19 19:30:04.179434 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:30:04.256900 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:30:04.349335 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:30:04.431121 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:30:04.508722 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:30:04.551389 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:30:04.551550 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:30:04.551652 | orchestrator | 2025-05-19 19:30:04.553377 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:30:04.553423 | orchestrator | 2025-05-19 19:30:04 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-19 19:30:04.553439 | orchestrator | 2025-05-19 19:30:04 | INFO  | Please wait and do not abort execution. 2025-05-19 19:30:04.554556 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.555805 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.556414 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.557517 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.557911 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.558898 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.559363 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:30:04.560085 | orchestrator | 2025-05-19 19:30:04.560464 | orchestrator | Monday 19 May 2025 19:30:04 +0000 (0:00:00.531) 0:00:05.628 ************ 2025-05-19 19:30:04.560890 | orchestrator | =============================================================================== 2025-05-19 19:30:04.561578 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s 2025-05-19 19:30:04.561825 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-05-19 19:30:05.054855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-19 19:30:05.072919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-19 19:30:05.086743 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-19 19:30:05.095948 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-19 19:30:05.107069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-19 19:30:05.128975 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-19 19:30:05.144382 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-19 19:30:05.160034 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-19 19:30:05.181964 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-19 19:30:05.198927 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-19 19:30:05.211831 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-19 19:30:05.224092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-19 19:30:05.238395 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-19 19:30:05.251343 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-19 19:30:05.269676 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-19 19:30:05.287273 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-19 19:30:05.307609 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-19 19:30:05.323928 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-19 19:30:05.339892 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-19 19:30:05.358108 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-19 19:30:05.376834 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-19 19:30:05.480102 | orchestrator | ok: Runtime: 0:25:56.975666 2025-05-19 19:30:05.585423 | 2025-05-19 19:30:05.585566 | TASK [Deploy services] 2025-05-19 19:30:06.125248 | orchestrator | skipping: Conditional result was False 2025-05-19 19:30:06.147692 | 2025-05-19 19:30:06.147887 | TASK [Deploy in a nutshell] 2025-05-19 19:30:06.870987 | orchestrator | + set -e 2025-05-19 19:30:06.871225 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 19:30:06.871252 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 19:30:06.871274 | orchestrator | ++ INTERACTIVE=false 2025-05-19 19:30:06.871288 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 19:30:06.871301 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 19:30:06.871330 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 19:30:06.871375 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 19:30:06.871405 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 19:30:06.871419 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 19:30:06.871436 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 19:30:06.871448 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 19:30:06.871466 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 19:30:06.871477 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-19 19:30:06.871497 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-19 19:30:06.871508 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 19:30:06.871523 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 19:30:06.871534 | orchestrator | ++ export ARA=false 2025-05-19 19:30:06.871546 | orchestrator | ++ ARA=false 2025-05-19 19:30:06.871562 | orchestrator | ++ export TEMPEST=false 2025-05-19 19:30:06.871574 | orchestrator | ++ TEMPEST=false 2025-05-19 19:30:06.871585 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 19:30:06.871596 | orchestrator | ++ IS_ZUUL=true 2025-05-19 19:30:06.871610 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:30:06.871629 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.40 2025-05-19 19:30:06.871648 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 19:30:06.871665 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 19:30:06.871685 | orchestrator | 2025-05-19 19:30:06.871704 | orchestrator | # PULL IMAGES 2025-05-19 19:30:06.871721 | orchestrator | 2025-05-19 
19:30:06.871732 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 19:30:06.871743 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 19:30:06.871754 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 19:30:06.871766 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 19:30:06.871777 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 19:30:06.871787 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 19:30:06.871799 | orchestrator | + echo 2025-05-19 19:30:06.871810 | orchestrator | + echo '# PULL IMAGES' 2025-05-19 19:30:06.871821 | orchestrator | + echo 2025-05-19 19:30:06.872780 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-19 19:30:06.931238 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-19 19:30:06.931330 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-19 19:30:08.294481 | orchestrator | 2025-05-19 19:30:08 | INFO  | Trying to run play pull-images in environment custom 2025-05-19 19:30:08.341390 | orchestrator | 2025-05-19 19:30:08 | INFO  | Task 60c7af90-2286-4077-9c70-e3ace682bbe4 (pull-images) was prepared for execution. 2025-05-19 19:30:08.341495 | orchestrator | 2025-05-19 19:30:08 | INFO  | It takes a moment until task 60c7af90-2286-4077-9c70-e3ace682bbe4 (pull-images) has been started and output is visible here. 2025-05-19 19:30:11.334564 | orchestrator | 2025-05-19 19:30:11.334866 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-19 19:30:11.336718 | orchestrator | 2025-05-19 19:30:11.338429 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-19 19:30:11.338842 | orchestrator | Monday 19 May 2025 19:30:11 +0000 (0:00:00.136) 0:00:00.136 ************ 2025-05-19 19:30:48.825692 | orchestrator | changed: [testbed-manager] 2025-05-19 19:30:48.825825 | orchestrator | 2025-05-19 19:30:48.825855 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-19 19:30:48.825870 | orchestrator | Monday 19 May 2025 19:30:48 +0000 (0:00:37.487) 0:00:37.624 ************ 2025-05-19 19:31:36.801474 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-19 19:31:36.803140 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-19 19:31:36.803233 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-19 19:31:36.803254 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-19 19:31:36.803272 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-19 19:31:36.803290 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-19 19:31:36.803310 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-19 19:31:36.803328 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-19 19:31:36.803384 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-19 19:31:36.803551 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-19 19:31:36.805296 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-05-19 19:31:36.805655 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-19 19:31:36.806080 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-19 19:31:36.806548 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-19 19:31:36.806884 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-19 19:31:36.808314 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-19 19:31:36.808350 | 
orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-19 19:31:36.808361 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-19 19:31:36.808371 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-19 19:31:36.808381 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-19 19:31:36.808511 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-19 19:31:36.808924 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-19 19:31:36.809335 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-19 19:31:36.809651 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-19 19:31:36.810010 | orchestrator | 2025-05-19 19:31:36.810831 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:31:36.810849 | orchestrator | 2025-05-19 19:31:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:31:36.810860 | orchestrator | 2025-05-19 19:31:36 | INFO  | Please wait and do not abort execution. 2025-05-19 19:31:36.812647 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:31:36.812669 | orchestrator | 2025-05-19 19:31:36.812681 | orchestrator | Monday 19 May 2025 19:31:36 +0000 (0:00:47.979) 0:01:25.603 ************ 2025-05-19 19:31:36.812693 | orchestrator | =============================================================================== 2025-05-19 19:31:36.812705 | orchestrator | Pull other images ------------------------------------------------------ 47.98s 2025-05-19 19:31:36.812716 | orchestrator | Pull keystone image ---------------------------------------------------- 37.49s 2025-05-19 19:31:38.830653 | orchestrator | 2025-05-19 19:31:38 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-19 19:31:38.897033 | orchestrator | 2025-05-19 19:31:38 | INFO  | Task 4c025320-5cc0-45d7-8470-904aeafdd619 (wipe-partitions) was prepared for execution. 2025-05-19 19:31:38.897138 | orchestrator | 2025-05-19 19:31:38 | INFO  | It takes a moment until task 4c025320-5cc0-45d7-8470-904aeafdd619 (wipe-partitions) has been started and output is visible here. 
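Pre-pulling the Kolla images on the manager, as done above, is essentially one docker pull per service; a sketch of the loop using community.docker, where registry, namespace and tag are placeholders rather than the values this job uses:

- name: Pull other images
  community.docker.docker_image:
    name: "quay.io/osism/{{ item }}"                      # placeholder registry/namespace
    tag: "{{ openstack_version | default('2024.2') }}"    # assumed variable, value as in manager-vars.sh
    source: pull
  loop:
    - glance
    - neutron
    - nova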
2025-05-19 19:31:41.884217 | orchestrator | 2025-05-19 19:31:41.884333 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-19 19:31:41.884348 | orchestrator | 2025-05-19 19:31:41.884369 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-19 19:31:41.884506 | orchestrator | Monday 19 May 2025 19:31:41 +0000 (0:00:00.113) 0:00:00.113 ************ 2025-05-19 19:31:42.402311 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:31:42.403477 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:31:42.405951 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:31:42.406710 | orchestrator | 2025-05-19 19:31:42.408503 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-19 19:31:42.409560 | orchestrator | Monday 19 May 2025 19:31:42 +0000 (0:00:00.520) 0:00:00.633 ************ 2025-05-19 19:31:42.551527 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:31:42.634964 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:31:42.635132 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:31:42.635692 | orchestrator | 2025-05-19 19:31:42.636120 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-19 19:31:42.636830 | orchestrator | Monday 19 May 2025 19:31:42 +0000 (0:00:00.231) 0:00:00.865 ************ 2025-05-19 19:31:43.268221 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:31:43.268915 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:31:43.269552 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:31:43.272252 | orchestrator | 2025-05-19 19:31:43.272620 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-19 19:31:43.272829 | orchestrator | Monday 19 May 2025 19:31:43 +0000 (0:00:00.633) 0:00:01.498 ************ 2025-05-19 19:31:43.405893 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:31:43.499272 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:31:43.499385 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:31:43.499495 | orchestrator | 2025-05-19 19:31:43.499553 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-19 19:31:43.499776 | orchestrator | Monday 19 May 2025 19:31:43 +0000 (0:00:00.233) 0:00:01.732 ************ 2025-05-19 19:31:44.654397 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 19:31:44.655343 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 19:31:44.655381 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 19:31:44.655393 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 19:31:44.655421 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 19:31:44.655432 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 19:31:44.655443 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 19:31:44.655925 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 19:31:44.655948 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 19:31:44.656326 | orchestrator | 2025-05-19 19:31:44.656666 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-19 19:31:44.656692 | orchestrator | Monday 19 May 2025 19:31:44 +0000 (0:00:01.147) 0:00:02.879 ************ 2025-05-19 19:31:45.955314 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 19:31:45.956113 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 19:31:45.956181 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 19:31:45.956609 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 19:31:45.957315 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 19:31:45.957763 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 19:31:45.958622 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 19:31:45.959174 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 19:31:45.959820 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 19:31:45.960290 | orchestrator | 2025-05-19 19:31:45.963615 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-19 19:31:45.963626 | orchestrator | Monday 19 May 2025 19:31:45 +0000 (0:00:01.301) 0:00:04.181 ************ 2025-05-19 19:31:48.086010 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-19 19:31:48.086268 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-19 19:31:48.086490 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-19 19:31:48.086794 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-19 19:31:48.087381 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-19 19:31:48.087918 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-19 19:31:48.088482 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-19 19:31:48.088917 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-19 19:31:48.089525 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-19 19:31:48.090187 | orchestrator | 2025-05-19 19:31:48.090728 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-19 19:31:48.093016 | orchestrator | Monday 19 May 2025 19:31:48 +0000 (0:00:02.134) 0:00:06.315 ************ 2025-05-19 19:31:48.677386 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:31:48.677515 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:31:48.678419 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:31:48.678529 | orchestrator | 2025-05-19 19:31:48.678872 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-19 19:31:48.679077 | orchestrator | Monday 19 May 2025 19:31:48 +0000 (0:00:00.594) 0:00:06.910 ************ 2025-05-19 19:31:49.277739 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:31:49.277871 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:31:49.277894 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:31:49.278009 | orchestrator | 2025-05-19 19:31:49.278630 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:31:49.278853 | orchestrator | 2025-05-19 19:31:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:31:49.279240 | orchestrator | 2025-05-19 19:31:49 | INFO  | Please wait and do not abort execution. 
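The wipe-partitions play prepares the OSD disks on the storage nodes: it checks that each device exists, clears old signatures with wipefs, zeroes the first 32M, and re-triggers udev so the kernel re-reads the now-empty disks. A compressed sketch of those tasks; the device list matches the output above, while the module choices and the ceph_devices variable are assumptions:

- name: Wipe partitions with wipefs
  ansible.builtin.command: "wipefs --all --force {{ item }}"
  loop: "{{ ceph_devices | default(['/dev/sdb', '/dev/sdc', '/dev/sdd']) }}"
  become: true

- name: Overwrite first 32M with zeros
  ansible.builtin.command: "dd if=/dev/zero of={{ item }} bs=1M count=32 oflag=direct"
  loop: "{{ ceph_devices | default(['/dev/sdb', '/dev/sdc', '/dev/sdd']) }}"
  become: true

- name: Reload udev rules
  ansible.builtin.command: udevadm control --reload-rules
  become: true

- name: Request device events from the kernel
  ansible.builtin.command: udevadm trigger
  become: true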
2025-05-19 19:31:49.279909 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:31:49.282717 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:31:49.282807 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:31:49.282844 | orchestrator | 2025-05-19 19:31:49.282852 | orchestrator | Monday 19 May 2025 19:31:49 +0000 (0:00:00.599) 0:00:07.509 ************ 2025-05-19 19:31:49.282859 | orchestrator | =============================================================================== 2025-05-19 19:31:49.282865 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s 2025-05-19 19:31:49.282905 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.30s 2025-05-19 19:31:49.283291 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-05-19 19:31:49.283512 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.63s 2025-05-19 19:31:49.287313 | orchestrator | Request device events from the kernel ----------------------------------- 0.60s 2025-05-19 19:31:49.287352 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s 2025-05-19 19:31:49.287358 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.52s 2025-05-19 19:31:49.287363 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s 2025-05-19 19:31:49.287369 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s 2025-05-19 19:31:51.240518 | orchestrator | 2025-05-19 19:31:51 | INFO  | Task e9e437d3-7595-4abd-99b0-7dbfe151e170 (facts) was prepared for execution. 2025-05-19 19:31:51.240707 | orchestrator | 2025-05-19 19:31:51 | INFO  | It takes a moment until task e9e437d3-7595-4abd-99b0-7dbfe151e170 (facts) has been started and output is visible here. 
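The facts step relies on Ansible's standard local-facts mechanism: osism.commons.facts ensures /etc/ansible/facts.d exists and can drop *.fact files there, which then appear under ansible_local on the next fact gathering. A minimal sketch of that pattern, with an illustrative file name and content:

- name: Create custom facts directory
  ansible.builtin.file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"

- name: Copy fact files
  ansible.builtin.copy:
    dest: /etc/ansible/facts.d/testbed.fact   # illustrative file name
    content: |
      [general]
      environment = testbed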
2025-05-19 19:31:54.314643 | orchestrator | 2025-05-19 19:31:54.316423 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 19:31:54.317240 | orchestrator | 2025-05-19 19:31:54.317937 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 19:31:54.318539 | orchestrator | Monday 19 May 2025 19:31:54 +0000 (0:00:00.194) 0:00:00.194 ************ 2025-05-19 19:31:55.304128 | orchestrator | ok: [testbed-manager] 2025-05-19 19:31:55.304413 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:31:55.305329 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:31:55.306863 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:31:55.307559 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:31:55.309806 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:31:55.310264 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:31:55.312207 | orchestrator | 2025-05-19 19:31:55.313212 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 19:31:55.314224 | orchestrator | Monday 19 May 2025 19:31:55 +0000 (0:00:00.991) 0:00:01.185 ************ 2025-05-19 19:31:55.472526 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:31:55.551732 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:31:55.634652 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:31:55.718274 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:31:55.794346 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:31:56.543704 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:31:56.544241 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:31:56.544794 | orchestrator | 2025-05-19 19:31:56.545240 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 19:31:56.545777 | orchestrator | 2025-05-19 19:31:56.546488 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:31:56.547139 | orchestrator | Monday 19 May 2025 19:31:56 +0000 (0:00:01.234) 0:00:02.420 ************ 2025-05-19 19:32:01.079261 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:32:01.079397 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:32:01.080621 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:32:01.081359 | orchestrator | ok: [testbed-manager] 2025-05-19 19:32:01.082738 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:32:01.086264 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:32:01.086704 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:32:01.087185 | orchestrator | 2025-05-19 19:32:01.088097 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 19:32:01.088472 | orchestrator | 2025-05-19 19:32:01.088829 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 19:32:01.090369 | orchestrator | Monday 19 May 2025 19:32:01 +0000 (0:00:04.541) 0:00:06.962 ************ 2025-05-19 19:32:01.417924 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:32:01.489568 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:32:01.565844 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:32:01.641484 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:32:01.720130 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:01.761649 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:01.761755 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:32:01.761770 | orchestrator | 2025-05-19 19:32:01.762443 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:32:01.762541 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.762557 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.762569 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.762581 | orchestrator | 2025-05-19 19:32:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:32:01.762594 | orchestrator | 2025-05-19 19:32:01 | INFO  | Please wait and do not abort execution. 2025-05-19 19:32:01.762606 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.762778 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.763307 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.763405 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:32:01.764660 | orchestrator | 2025-05-19 19:32:01.764694 | orchestrator | Monday 19 May 2025 19:32:01 +0000 (0:00:00.680) 0:00:07.642 ************ 2025-05-19 19:32:01.764919 | orchestrator | =============================================================================== 2025-05-19 19:32:01.765425 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.54s 2025-05-19 19:32:01.765776 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2025-05-19 19:32:01.766258 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s 2025-05-19 19:32:01.766284 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.68s 2025-05-19 19:32:04.854105 | orchestrator | 2025-05-19 19:32:04 | INFO  | Task 98c6477f-25f3-48d5-9b1b-351daa8ceba5 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-19 19:32:04.854295 | orchestrator | 2025-05-19 19:32:04 | INFO  | It takes a moment until task 98c6477f-25f3-48d5-9b1b-351daa8ceba5 (ceph-configure-lvm-volumes) has been started and output is visible here. 
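The ceph-configure-lvm-volumes play that follows inventories the block devices on each storage node, resolves their stable /dev/disk/by-id links, assigns one UUID per OSD disk, and renders the ceph-ansible lvm_volumes structure from that. For a block-only layout the generated host_vars look roughly like the sketch below; the ceph_osd_devices key and the VG/LV naming scheme are illustrative, and only the sdb UUID is taken from the task output further down:

ceph_osd_devices:                  # assumed variable name
  sdb:
    osd_lvm_uuid: 6eb1ee5c-85e6-559d-849b-4772bddae6d6     # value visible in the output below
  sdc:
    osd_lvm_uuid: 00000000-0000-0000-0000-000000000000     # illustrative placeholder

lvm_volumes:                       # consumed by ceph-ansible
  - data: osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6   # illustrative LV name
    data_vg: ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6     # illustrative VG name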
2025-05-19 19:32:08.264102 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:32:08.798099 | orchestrator | 2025-05-19 19:32:08.798304 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-19 19:32:08.798408 | orchestrator | 2025-05-19 19:32:08.798930 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:32:08.802521 | orchestrator | Monday 19 May 2025 19:32:08 +0000 (0:00:00.463) 0:00:00.463 ************ 2025-05-19 19:32:09.083232 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:09.086101 | orchestrator | 2025-05-19 19:32:09.087183 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:32:09.087754 | orchestrator | Monday 19 May 2025 19:32:09 +0000 (0:00:00.286) 0:00:00.750 ************ 2025-05-19 19:32:09.347974 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:32:09.350411 | orchestrator | 2025-05-19 19:32:09.352955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:09.354831 | orchestrator | Monday 19 May 2025 19:32:09 +0000 (0:00:00.265) 0:00:01.016 ************ 2025-05-19 19:32:10.152470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-19 19:32:10.152643 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-19 19:32:10.153912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-19 19:32:10.154892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-19 19:32:10.157885 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-19 19:32:10.158516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-19 19:32:10.159714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-19 19:32:10.160457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-19 19:32:10.161096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-19 19:32:10.162095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-19 19:32:10.162659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-19 19:32:10.163391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-19 19:32:10.164213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-19 19:32:10.164323 | orchestrator | 2025-05-19 19:32:10.164721 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:10.165344 | orchestrator | Monday 19 May 2025 19:32:10 +0000 (0:00:00.798) 0:00:01.814 ************ 2025-05-19 19:32:10.375668 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:10.376449 | orchestrator | 2025-05-19 19:32:10.377271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:10.381237 | orchestrator | Monday 19 May 2025 19:32:10 +0000 
(0:00:00.227) 0:00:02.042 ************ 2025-05-19 19:32:10.578421 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:10.578524 | orchestrator | 2025-05-19 19:32:10.580137 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:10.581767 | orchestrator | Monday 19 May 2025 19:32:10 +0000 (0:00:00.204) 0:00:02.247 ************ 2025-05-19 19:32:10.790601 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:10.791896 | orchestrator | 2025-05-19 19:32:10.793897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:10.794769 | orchestrator | Monday 19 May 2025 19:32:10 +0000 (0:00:00.211) 0:00:02.458 ************ 2025-05-19 19:32:11.029389 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:11.029840 | orchestrator | 2025-05-19 19:32:11.031655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:11.033448 | orchestrator | Monday 19 May 2025 19:32:11 +0000 (0:00:00.241) 0:00:02.699 ************ 2025-05-19 19:32:11.259566 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:11.261390 | orchestrator | 2025-05-19 19:32:11.264251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:11.266581 | orchestrator | Monday 19 May 2025 19:32:11 +0000 (0:00:00.229) 0:00:02.929 ************ 2025-05-19 19:32:11.462949 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:11.463030 | orchestrator | 2025-05-19 19:32:11.463846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:11.464591 | orchestrator | Monday 19 May 2025 19:32:11 +0000 (0:00:00.201) 0:00:03.130 ************ 2025-05-19 19:32:11.669362 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:11.669480 | orchestrator | 2025-05-19 19:32:11.672480 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:11.673133 | orchestrator | Monday 19 May 2025 19:32:11 +0000 (0:00:00.207) 0:00:03.338 ************ 2025-05-19 19:32:11.860636 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:11.860734 | orchestrator | 2025-05-19 19:32:11.860745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:11.860794 | orchestrator | Monday 19 May 2025 19:32:11 +0000 (0:00:00.190) 0:00:03.528 ************ 2025-05-19 19:32:12.485091 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4) 2025-05-19 19:32:12.485243 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4) 2025-05-19 19:32:12.485322 | orchestrator | 2025-05-19 19:32:12.485537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:12.485786 | orchestrator | Monday 19 May 2025 19:32:12 +0000 (0:00:00.626) 0:00:04.155 ************ 2025-05-19 19:32:13.419878 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199) 2025-05-19 19:32:13.420618 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199) 2025-05-19 19:32:13.422439 | orchestrator | 2025-05-19 19:32:13.426818 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 
19:32:13.427980 | orchestrator | Monday 19 May 2025 19:32:13 +0000 (0:00:00.919) 0:00:05.074 ************ 2025-05-19 19:32:13.900813 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2) 2025-05-19 19:32:13.900914 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2) 2025-05-19 19:32:13.900959 | orchestrator | 2025-05-19 19:32:13.903657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:13.904040 | orchestrator | Monday 19 May 2025 19:32:13 +0000 (0:00:00.492) 0:00:05.567 ************ 2025-05-19 19:32:14.455644 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53) 2025-05-19 19:32:14.456284 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53) 2025-05-19 19:32:14.456948 | orchestrator | 2025-05-19 19:32:14.459747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:14.460237 | orchestrator | Monday 19 May 2025 19:32:14 +0000 (0:00:00.557) 0:00:06.125 ************ 2025-05-19 19:32:14.809739 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:32:14.812949 | orchestrator | 2025-05-19 19:32:14.813767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:14.815677 | orchestrator | Monday 19 May 2025 19:32:14 +0000 (0:00:00.351) 0:00:06.477 ************ 2025-05-19 19:32:15.283245 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-19 19:32:15.283344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-19 19:32:15.283447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-19 19:32:15.284158 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-19 19:32:15.284207 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-19 19:32:15.284638 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-19 19:32:15.287907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-19 19:32:15.287968 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-19 19:32:15.289221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-19 19:32:15.289721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-19 19:32:15.290088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-19 19:32:15.290439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-19 19:32:15.291065 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-19 19:32:15.292184 | orchestrator | 2025-05-19 19:32:15.293506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:15.293892 | orchestrator | Monday 19 May 2025 19:32:15 +0000 
(0:00:00.475) 0:00:06.952 ************ 2025-05-19 19:32:15.514357 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:15.514500 | orchestrator | 2025-05-19 19:32:15.514623 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:15.514903 | orchestrator | Monday 19 May 2025 19:32:15 +0000 (0:00:00.231) 0:00:07.183 ************ 2025-05-19 19:32:15.855191 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:15.855317 | orchestrator | 2025-05-19 19:32:15.855334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:15.855347 | orchestrator | Monday 19 May 2025 19:32:15 +0000 (0:00:00.337) 0:00:07.521 ************ 2025-05-19 19:32:16.068850 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:16.071325 | orchestrator | 2025-05-19 19:32:16.071358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:16.071372 | orchestrator | Monday 19 May 2025 19:32:16 +0000 (0:00:00.218) 0:00:07.739 ************ 2025-05-19 19:32:16.287224 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:16.287362 | orchestrator | 2025-05-19 19:32:16.288083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:16.288965 | orchestrator | Monday 19 May 2025 19:32:16 +0000 (0:00:00.217) 0:00:07.956 ************ 2025-05-19 19:32:16.903289 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:16.903415 | orchestrator | 2025-05-19 19:32:16.903574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:16.903959 | orchestrator | Monday 19 May 2025 19:32:16 +0000 (0:00:00.611) 0:00:08.568 ************ 2025-05-19 19:32:17.113825 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:17.113909 | orchestrator | 2025-05-19 19:32:17.113916 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:17.113922 | orchestrator | Monday 19 May 2025 19:32:17 +0000 (0:00:00.210) 0:00:08.779 ************ 2025-05-19 19:32:17.325891 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:17.326201 | orchestrator | 2025-05-19 19:32:17.326231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:17.326388 | orchestrator | Monday 19 May 2025 19:32:17 +0000 (0:00:00.218) 0:00:08.997 ************ 2025-05-19 19:32:17.539618 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:17.539742 | orchestrator | 2025-05-19 19:32:17.539765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:17.539783 | orchestrator | Monday 19 May 2025 19:32:17 +0000 (0:00:00.209) 0:00:09.206 ************ 2025-05-19 19:32:18.137820 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-19 19:32:18.137900 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-19 19:32:18.137933 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-19 19:32:18.138074 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-19 19:32:18.138283 | orchestrator | 2025-05-19 19:32:18.138504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:18.139242 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.601) 0:00:09.808 ************ 2025-05-19 19:32:18.299701 | orchestrator | 
skipping: [testbed-node-3] 2025-05-19 19:32:18.300690 | orchestrator | 2025-05-19 19:32:18.300844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:18.301192 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.160) 0:00:09.969 ************ 2025-05-19 19:32:18.455290 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:18.455991 | orchestrator | 2025-05-19 19:32:18.456011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:18.456020 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.156) 0:00:10.126 ************ 2025-05-19 19:32:18.595939 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:18.596719 | orchestrator | 2025-05-19 19:32:18.596746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:18.596756 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.138) 0:00:10.264 ************ 2025-05-19 19:32:18.734669 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:18.734765 | orchestrator | 2025-05-19 19:32:18.734848 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-19 19:32:18.734862 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.141) 0:00:10.406 ************ 2025-05-19 19:32:18.870523 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-19 19:32:18.870640 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-19 19:32:18.870760 | orchestrator | 2025-05-19 19:32:18.870781 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-19 19:32:18.871109 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.133) 0:00:10.539 ************ 2025-05-19 19:32:18.969431 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:18.969614 | orchestrator | 2025-05-19 19:32:18.971224 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-19 19:32:18.971336 | orchestrator | Monday 19 May 2025 19:32:18 +0000 (0:00:00.100) 0:00:10.640 ************ 2025-05-19 19:32:19.210596 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:19.211925 | orchestrator | 2025-05-19 19:32:19.212101 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-19 19:32:19.212676 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.239) 0:00:10.879 ************ 2025-05-19 19:32:19.340800 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:19.341294 | orchestrator | 2025-05-19 19:32:19.342415 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-19 19:32:19.343052 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.131) 0:00:11.010 ************ 2025-05-19 19:32:19.477677 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:32:19.478272 | orchestrator | 2025-05-19 19:32:19.478985 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-19 19:32:19.481506 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.136) 0:00:11.147 ************ 2025-05-19 19:32:19.648632 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6eb1ee5c-85e6-559d-849b-4772bddae6d6'}}) 2025-05-19 19:32:19.648713 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 
'value': {'osd_lvm_uuid': '702b6aa6-b3de-5669-bdb1-4e94528c6268'}}) 2025-05-19 19:32:19.649636 | orchestrator | 2025-05-19 19:32:19.650732 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-19 19:32:19.651054 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.170) 0:00:11.318 ************ 2025-05-19 19:32:19.829585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6eb1ee5c-85e6-559d-849b-4772bddae6d6'}})  2025-05-19 19:32:19.830193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '702b6aa6-b3de-5669-bdb1-4e94528c6268'}})  2025-05-19 19:32:19.830761 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:19.831716 | orchestrator | 2025-05-19 19:32:19.832806 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-19 19:32:19.833250 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.181) 0:00:11.499 ************ 2025-05-19 19:32:19.994261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6eb1ee5c-85e6-559d-849b-4772bddae6d6'}})  2025-05-19 19:32:19.994779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '702b6aa6-b3de-5669-bdb1-4e94528c6268'}})  2025-05-19 19:32:19.995196 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:19.997751 | orchestrator | 2025-05-19 19:32:19.998309 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-19 19:32:19.998772 | orchestrator | Monday 19 May 2025 19:32:19 +0000 (0:00:00.163) 0:00:11.663 ************ 2025-05-19 19:32:20.147078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6eb1ee5c-85e6-559d-849b-4772bddae6d6'}})  2025-05-19 19:32:20.148298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '702b6aa6-b3de-5669-bdb1-4e94528c6268'}})  2025-05-19 19:32:20.149168 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:20.149958 | orchestrator | 2025-05-19 19:32:20.150589 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-19 19:32:20.152673 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.154) 0:00:11.818 ************ 2025-05-19 19:32:20.301364 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:32:20.303530 | orchestrator | 2025-05-19 19:32:20.303612 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-19 19:32:20.303636 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.154) 0:00:11.972 ************ 2025-05-19 19:32:20.441446 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:32:20.441805 | orchestrator | 2025-05-19 19:32:20.442365 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-19 19:32:20.443083 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.139) 0:00:12.112 ************ 2025-05-19 19:32:20.585019 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:20.586731 | orchestrator | 2025-05-19 19:32:20.589367 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-19 19:32:20.590229 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.142) 0:00:12.254 ************ 2025-05-19 19:32:20.727583 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
19:32:20.729169 | orchestrator | 2025-05-19 19:32:20.732288 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-19 19:32:20.732586 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.138) 0:00:12.393 ************ 2025-05-19 19:32:20.879121 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:20.879695 | orchestrator | 2025-05-19 19:32:20.879748 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-19 19:32:20.880163 | orchestrator | Monday 19 May 2025 19:32:20 +0000 (0:00:00.156) 0:00:12.549 ************ 2025-05-19 19:32:21.174542 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:32:21.174725 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:21.174743 | orchestrator |  "sdb": { 2025-05-19 19:32:21.175311 | orchestrator |  "osd_lvm_uuid": "6eb1ee5c-85e6-559d-849b-4772bddae6d6" 2025-05-19 19:32:21.175427 | orchestrator |  }, 2025-05-19 19:32:21.177977 | orchestrator |  "sdc": { 2025-05-19 19:32:21.178056 | orchestrator |  "osd_lvm_uuid": "702b6aa6-b3de-5669-bdb1-4e94528c6268" 2025-05-19 19:32:21.178070 | orchestrator |  } 2025-05-19 19:32:21.178279 | orchestrator |  } 2025-05-19 19:32:21.178480 | orchestrator | } 2025-05-19 19:32:21.179347 | orchestrator | 2025-05-19 19:32:21.179537 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-19 19:32:21.179608 | orchestrator | Monday 19 May 2025 19:32:21 +0000 (0:00:00.292) 0:00:12.842 ************ 2025-05-19 19:32:21.322593 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:21.323202 | orchestrator | 2025-05-19 19:32:21.324118 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-19 19:32:21.326288 | orchestrator | Monday 19 May 2025 19:32:21 +0000 (0:00:00.150) 0:00:12.992 ************ 2025-05-19 19:32:21.441489 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:21.443829 | orchestrator | 2025-05-19 19:32:21.444063 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-19 19:32:21.444091 | orchestrator | Monday 19 May 2025 19:32:21 +0000 (0:00:00.119) 0:00:13.111 ************ 2025-05-19 19:32:21.574218 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:32:21.575124 | orchestrator | 2025-05-19 19:32:21.575420 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-19 19:32:21.576328 | orchestrator | Monday 19 May 2025 19:32:21 +0000 (0:00:00.132) 0:00:13.243 ************ 2025-05-19 19:32:21.806720 | orchestrator | changed: [testbed-node-3] => { 2025-05-19 19:32:21.806838 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-19 19:32:21.806853 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:21.806940 | orchestrator |  "sdb": { 2025-05-19 19:32:21.809707 | orchestrator |  "osd_lvm_uuid": "6eb1ee5c-85e6-559d-849b-4772bddae6d6" 2025-05-19 19:32:21.809974 | orchestrator |  }, 2025-05-19 19:32:21.811004 | orchestrator |  "sdc": { 2025-05-19 19:32:21.814176 | orchestrator |  "osd_lvm_uuid": "702b6aa6-b3de-5669-bdb1-4e94528c6268" 2025-05-19 19:32:21.815242 | orchestrator |  } 2025-05-19 19:32:21.815257 | orchestrator |  }, 2025-05-19 19:32:21.815263 | orchestrator |  "lvm_volumes": [ 2025-05-19 19:32:21.816163 | orchestrator |  { 2025-05-19 19:32:21.816495 | orchestrator |  "data": "osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6", 2025-05-19 19:32:21.817051 | orchestrator |  
"data_vg": "ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6" 2025-05-19 19:32:21.817778 | orchestrator |  }, 2025-05-19 19:32:21.818369 | orchestrator |  { 2025-05-19 19:32:21.818627 | orchestrator |  "data": "osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268", 2025-05-19 19:32:21.819219 | orchestrator |  "data_vg": "ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268" 2025-05-19 19:32:21.819412 | orchestrator |  } 2025-05-19 19:32:21.819849 | orchestrator |  ] 2025-05-19 19:32:21.820116 | orchestrator |  } 2025-05-19 19:32:21.820487 | orchestrator | } 2025-05-19 19:32:21.822154 | orchestrator | 2025-05-19 19:32:21.822222 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-19 19:32:21.822511 | orchestrator | Monday 19 May 2025 19:32:21 +0000 (0:00:00.226) 0:00:13.470 ************ 2025-05-19 19:32:23.407443 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:23.407532 | orchestrator | 2025-05-19 19:32:23.407541 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-19 19:32:23.408028 | orchestrator | 2025-05-19 19:32:23.409404 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:32:23.409697 | orchestrator | Monday 19 May 2025 19:32:23 +0000 (0:00:01.601) 0:00:15.071 ************ 2025-05-19 19:32:23.593354 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:23.594159 | orchestrator | 2025-05-19 19:32:23.597415 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:32:23.597991 | orchestrator | Monday 19 May 2025 19:32:23 +0000 (0:00:00.191) 0:00:15.263 ************ 2025-05-19 19:32:23.791715 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:32:23.791939 | orchestrator | 2025-05-19 19:32:23.793126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:23.793655 | orchestrator | Monday 19 May 2025 19:32:23 +0000 (0:00:00.199) 0:00:15.462 ************ 2025-05-19 19:32:24.110895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-19 19:32:24.111905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-19 19:32:24.113212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-19 19:32:24.113586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-19 19:32:24.114415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-19 19:32:24.115354 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-19 19:32:24.116350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-19 19:32:24.117317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-19 19:32:24.118992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-19 19:32:24.119710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-19 19:32:24.120726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-19 19:32:24.121364 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-19 19:32:24.122162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-19 19:32:24.122604 | orchestrator | 2025-05-19 19:32:24.123630 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:24.124087 | orchestrator | Monday 19 May 2025 19:32:24 +0000 (0:00:00.317) 0:00:15.780 ************ 2025-05-19 19:32:24.265099 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:24.265266 | orchestrator | 2025-05-19 19:32:24.265284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:24.266092 | orchestrator | Monday 19 May 2025 19:32:24 +0000 (0:00:00.152) 0:00:15.933 ************ 2025-05-19 19:32:24.436774 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:24.438481 | orchestrator | 2025-05-19 19:32:24.438613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:24.438978 | orchestrator | Monday 19 May 2025 19:32:24 +0000 (0:00:00.172) 0:00:16.106 ************ 2025-05-19 19:32:24.604319 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:24.604509 | orchestrator | 2025-05-19 19:32:24.604854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:24.605392 | orchestrator | Monday 19 May 2025 19:32:24 +0000 (0:00:00.168) 0:00:16.274 ************ 2025-05-19 19:32:24.775997 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:24.776134 | orchestrator | 2025-05-19 19:32:24.778733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:24.778766 | orchestrator | Monday 19 May 2025 19:32:24 +0000 (0:00:00.170) 0:00:16.444 ************ 2025-05-19 19:32:25.223867 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:25.224650 | orchestrator | 2025-05-19 19:32:25.227548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:25.228371 | orchestrator | Monday 19 May 2025 19:32:25 +0000 (0:00:00.450) 0:00:16.895 ************ 2025-05-19 19:32:25.416362 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:25.416471 | orchestrator | 2025-05-19 19:32:25.417368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:25.418265 | orchestrator | Monday 19 May 2025 19:32:25 +0000 (0:00:00.191) 0:00:17.086 ************ 2025-05-19 19:32:25.590735 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:25.590850 | orchestrator | 2025-05-19 19:32:25.590934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:25.591162 | orchestrator | Monday 19 May 2025 19:32:25 +0000 (0:00:00.174) 0:00:17.261 ************ 2025-05-19 19:32:25.766759 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:25.767009 | orchestrator | 2025-05-19 19:32:25.767889 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:25.769540 | orchestrator | Monday 19 May 2025 19:32:25 +0000 (0:00:00.175) 0:00:17.437 ************ 2025-05-19 19:32:26.107334 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677) 2025-05-19 19:32:26.108587 | orchestrator | ok: [testbed-node-4] => 
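The configuration data printed above for testbed-node-3 shows each OSD device's osd_lvm_uuid turned into one lvm_volumes entry of the form data: osd-block-<uuid> / data_vg: ceph-<uuid>. The following self-contained sketch reproduces that mapping, assuming a set_fact loop over ceph_osd_devices | dict2items (the loop items match the item={'key': ..., 'value': {...}} pairs shown in the log); it is an illustration, not the actual OSISM task file.

# Sketch of the "Generate lvm_volumes structure (block only)" step, using the
# UUIDs printed for testbed-node-3 above.
- hosts: localhost
  gather_facts: false
  vars:
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: 6eb1ee5c-85e6-559d-849b-4772bddae6d6
      sdc:
        osd_lvm_uuid: 702b6aa6-b3de-5669-bdb1-4e94528c6268
  tasks:
    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        # Each device becomes one entry: data = osd-block-<uuid>, data_vg = ceph-<uuid>.
        lvm_volumes: "{{ lvm_volumes | default([]) + [{'data': 'osd-block-' + item.value.osd_lvm_uuid, 'data_vg': 'ceph-' + item.value.osd_lvm_uuid}] }}"
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Print the result
      ansible.builtin.debug:
        var: lvm_volumes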
(item=scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677) 2025-05-19 19:32:26.108619 | orchestrator | 2025-05-19 19:32:26.108672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:26.109492 | orchestrator | Monday 19 May 2025 19:32:26 +0000 (0:00:00.339) 0:00:17.776 ************ 2025-05-19 19:32:26.512399 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3) 2025-05-19 19:32:26.513197 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3) 2025-05-19 19:32:26.515610 | orchestrator | 2025-05-19 19:32:26.517232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:26.517530 | orchestrator | Monday 19 May 2025 19:32:26 +0000 (0:00:00.404) 0:00:18.181 ************ 2025-05-19 19:32:26.892761 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3) 2025-05-19 19:32:26.896987 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3) 2025-05-19 19:32:26.897003 | orchestrator | 2025-05-19 19:32:26.897009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:26.897013 | orchestrator | Monday 19 May 2025 19:32:26 +0000 (0:00:00.381) 0:00:18.562 ************ 2025-05-19 19:32:27.287645 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d) 2025-05-19 19:32:27.288061 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d) 2025-05-19 19:32:27.288366 | orchestrator | 2025-05-19 19:32:27.289389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:27.290873 | orchestrator | Monday 19 May 2025 19:32:27 +0000 (0:00:00.392) 0:00:18.955 ************ 2025-05-19 19:32:27.606988 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:32:27.607121 | orchestrator | 2025-05-19 19:32:27.607200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:27.607215 | orchestrator | Monday 19 May 2025 19:32:27 +0000 (0:00:00.317) 0:00:19.272 ************ 2025-05-19 19:32:28.419762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-19 19:32:28.420779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-19 19:32:28.422864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-19 19:32:28.423612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-19 19:32:28.424702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-19 19:32:28.428687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-19 19:32:28.430441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-19 19:32:28.432929 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-19 19:32:28.433193 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-19 19:32:28.435402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-19 19:32:28.435806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-19 19:32:28.436333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-19 19:32:28.437288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-19 19:32:28.437311 | orchestrator | 2025-05-19 19:32:28.437997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:28.439800 | orchestrator | Monday 19 May 2025 19:32:28 +0000 (0:00:00.814) 0:00:20.087 ************ 2025-05-19 19:32:28.629060 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:28.629985 | orchestrator | 2025-05-19 19:32:28.630074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:28.630504 | orchestrator | Monday 19 May 2025 19:32:28 +0000 (0:00:00.210) 0:00:20.297 ************ 2025-05-19 19:32:28.837021 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:28.837275 | orchestrator | 2025-05-19 19:32:28.841562 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:28.842749 | orchestrator | Monday 19 May 2025 19:32:28 +0000 (0:00:00.207) 0:00:20.505 ************ 2025-05-19 19:32:29.064953 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:29.067725 | orchestrator | 2025-05-19 19:32:29.067768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:29.067782 | orchestrator | Monday 19 May 2025 19:32:29 +0000 (0:00:00.225) 0:00:20.731 ************ 2025-05-19 19:32:29.267687 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:29.267923 | orchestrator | 2025-05-19 19:32:29.269130 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:29.272025 | orchestrator | Monday 19 May 2025 19:32:29 +0000 (0:00:00.204) 0:00:20.936 ************ 2025-05-19 19:32:29.471597 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:29.471770 | orchestrator | 2025-05-19 19:32:29.473179 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:29.476554 | orchestrator | Monday 19 May 2025 19:32:29 +0000 (0:00:00.204) 0:00:21.140 ************ 2025-05-19 19:32:29.668649 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:29.669557 | orchestrator | 2025-05-19 19:32:29.670904 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:29.671688 | orchestrator | Monday 19 May 2025 19:32:29 +0000 (0:00:00.194) 0:00:21.335 ************ 2025-05-19 19:32:29.879741 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:29.880768 | orchestrator | 2025-05-19 19:32:29.887560 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:29.887628 | orchestrator | Monday 19 May 2025 19:32:29 +0000 (0:00:00.213) 0:00:21.548 ************ 2025-05-19 19:32:30.086632 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:30.086737 | orchestrator | 2025-05-19 19:32:30.088737 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-19 19:32:30.088771 | orchestrator | Monday 19 May 2025 19:32:30 +0000 (0:00:00.205) 0:00:21.754 ************ 2025-05-19 19:32:30.928774 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-19 19:32:30.929609 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-19 19:32:30.930745 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-19 19:32:30.933355 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-19 19:32:30.934568 | orchestrator | 2025-05-19 19:32:30.935634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:30.936714 | orchestrator | Monday 19 May 2025 19:32:30 +0000 (0:00:00.843) 0:00:22.598 ************ 2025-05-19 19:32:31.127097 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:31.128367 | orchestrator | 2025-05-19 19:32:31.129403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:31.130914 | orchestrator | Monday 19 May 2025 19:32:31 +0000 (0:00:00.197) 0:00:22.796 ************ 2025-05-19 19:32:31.757679 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:31.758883 | orchestrator | 2025-05-19 19:32:31.760673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:31.761576 | orchestrator | Monday 19 May 2025 19:32:31 +0000 (0:00:00.630) 0:00:23.426 ************ 2025-05-19 19:32:31.933208 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:31.933704 | orchestrator | 2025-05-19 19:32:31.934364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:31.934950 | orchestrator | Monday 19 May 2025 19:32:31 +0000 (0:00:00.176) 0:00:23.603 ************ 2025-05-19 19:32:32.126320 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:32.127382 | orchestrator | 2025-05-19 19:32:32.129851 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-19 19:32:32.129878 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.192) 0:00:23.795 ************ 2025-05-19 19:32:32.298553 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-19 19:32:32.298900 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-19 19:32:32.299578 | orchestrator | 2025-05-19 19:32:32.300315 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-19 19:32:32.301002 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.172) 0:00:23.968 ************ 2025-05-19 19:32:32.449705 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:32.450067 | orchestrator | 2025-05-19 19:32:32.451120 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-19 19:32:32.453607 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.150) 0:00:24.119 ************ 2025-05-19 19:32:32.597348 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:32.598246 | orchestrator | 2025-05-19 19:32:32.600126 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-19 19:32:32.601438 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.147) 0:00:24.266 ************ 2025-05-19 19:32:32.744267 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:32.745934 | orchestrator | 2025-05-19 
19:32:32.746521 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-19 19:32:32.747519 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.147) 0:00:24.414 ************ 2025-05-19 19:32:32.888790 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:32:32.888911 | orchestrator | 2025-05-19 19:32:32.889004 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-19 19:32:32.889563 | orchestrator | Monday 19 May 2025 19:32:32 +0000 (0:00:00.144) 0:00:24.559 ************ 2025-05-19 19:32:33.069414 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}}) 2025-05-19 19:32:33.070573 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdf60fa-c839-55c0-9693-b393079e2a5b'}}) 2025-05-19 19:32:33.072223 | orchestrator | 2025-05-19 19:32:33.075585 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-19 19:32:33.075632 | orchestrator | Monday 19 May 2025 19:32:33 +0000 (0:00:00.179) 0:00:24.739 ************ 2025-05-19 19:32:33.233381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}})  2025-05-19 19:32:33.233806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdf60fa-c839-55c0-9693-b393079e2a5b'}})  2025-05-19 19:32:33.236254 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:33.238156 | orchestrator | 2025-05-19 19:32:33.238664 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-19 19:32:33.239121 | orchestrator | Monday 19 May 2025 19:32:33 +0000 (0:00:00.162) 0:00:24.902 ************ 2025-05-19 19:32:33.397229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}})  2025-05-19 19:32:33.398361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdf60fa-c839-55c0-9693-b393079e2a5b'}})  2025-05-19 19:32:33.399992 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:33.400937 | orchestrator | 2025-05-19 19:32:33.402089 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-19 19:32:33.402308 | orchestrator | Monday 19 May 2025 19:32:33 +0000 (0:00:00.164) 0:00:25.066 ************ 2025-05-19 19:32:33.748453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}})  2025-05-19 19:32:33.748646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdf60fa-c839-55c0-9693-b393079e2a5b'}})  2025-05-19 19:32:33.749923 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:33.752623 | orchestrator | 2025-05-19 19:32:33.752719 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-19 19:32:33.753458 | orchestrator | Monday 19 May 2025 19:32:33 +0000 (0:00:00.351) 0:00:25.417 ************ 2025-05-19 19:32:33.888213 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:32:33.888610 | orchestrator | 2025-05-19 19:32:33.890171 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-19 19:32:33.892555 | orchestrator | Monday 19 May 2025 19:32:33 +0000 
(0:00:00.139) 0:00:25.557 ************ 2025-05-19 19:32:34.027743 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:32:34.027931 | orchestrator | 2025-05-19 19:32:34.028965 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-19 19:32:34.032112 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.138) 0:00:25.696 ************ 2025-05-19 19:32:34.165854 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.167816 | orchestrator | 2025-05-19 19:32:34.167890 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-19 19:32:34.170277 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.138) 0:00:25.834 ************ 2025-05-19 19:32:34.304367 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.305178 | orchestrator | 2025-05-19 19:32:34.306340 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-19 19:32:34.307263 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.138) 0:00:25.973 ************ 2025-05-19 19:32:34.446575 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.446678 | orchestrator | 2025-05-19 19:32:34.446685 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-19 19:32:34.446691 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.141) 0:00:26.115 ************ 2025-05-19 19:32:34.592420 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:32:34.592542 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:34.592870 | orchestrator |  "sdb": { 2025-05-19 19:32:34.593698 | orchestrator |  "osd_lvm_uuid": "54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e" 2025-05-19 19:32:34.594369 | orchestrator |  }, 2025-05-19 19:32:34.596335 | orchestrator |  "sdc": { 2025-05-19 19:32:34.596419 | orchestrator |  "osd_lvm_uuid": "5fdf60fa-c839-55c0-9693-b393079e2a5b" 2025-05-19 19:32:34.596436 | orchestrator |  } 2025-05-19 19:32:34.596449 | orchestrator |  } 2025-05-19 19:32:34.596534 | orchestrator | } 2025-05-19 19:32:34.596667 | orchestrator | 2025-05-19 19:32:34.597261 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-19 19:32:34.597355 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.146) 0:00:26.261 ************ 2025-05-19 19:32:34.733082 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.733191 | orchestrator | 2025-05-19 19:32:34.733244 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-19 19:32:34.733252 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.140) 0:00:26.401 ************ 2025-05-19 19:32:34.874484 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.874696 | orchestrator | 2025-05-19 19:32:34.874962 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-19 19:32:34.880353 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.143) 0:00:26.545 ************ 2025-05-19 19:32:34.993811 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:32:34.999023 | orchestrator | 2025-05-19 19:32:34.999604 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-19 19:32:35.000519 | orchestrator | Monday 19 May 2025 19:32:34 +0000 (0:00:00.117) 0:00:26.663 ************ 2025-05-19 19:32:35.471369 | orchestrator | changed: [testbed-node-4] => { 2025-05-19 19:32:35.474581 | 
orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-19 19:32:35.476596 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:35.477351 | orchestrator |  "sdb": { 2025-05-19 19:32:35.477940 | orchestrator |  "osd_lvm_uuid": "54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e" 2025-05-19 19:32:35.480964 | orchestrator |  }, 2025-05-19 19:32:35.481018 | orchestrator |  "sdc": { 2025-05-19 19:32:35.481034 | orchestrator |  "osd_lvm_uuid": "5fdf60fa-c839-55c0-9693-b393079e2a5b" 2025-05-19 19:32:35.481451 | orchestrator |  } 2025-05-19 19:32:35.481917 | orchestrator |  }, 2025-05-19 19:32:35.482667 | orchestrator |  "lvm_volumes": [ 2025-05-19 19:32:35.483125 | orchestrator |  { 2025-05-19 19:32:35.483667 | orchestrator |  "data": "osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e", 2025-05-19 19:32:35.484258 | orchestrator |  "data_vg": "ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e" 2025-05-19 19:32:35.484726 | orchestrator |  }, 2025-05-19 19:32:35.485265 | orchestrator |  { 2025-05-19 19:32:35.486128 | orchestrator |  "data": "osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b", 2025-05-19 19:32:35.487218 | orchestrator |  "data_vg": "ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b" 2025-05-19 19:32:35.487558 | orchestrator |  } 2025-05-19 19:32:35.488515 | orchestrator |  ] 2025-05-19 19:32:35.488922 | orchestrator |  } 2025-05-19 19:32:35.489310 | orchestrator | } 2025-05-19 19:32:35.489766 | orchestrator | 2025-05-19 19:32:35.490956 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-19 19:32:35.491572 | orchestrator | Monday 19 May 2025 19:32:35 +0000 (0:00:00.472) 0:00:27.136 ************ 2025-05-19 19:32:36.886840 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:36.887473 | orchestrator | 2025-05-19 19:32:36.889570 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-19 19:32:36.890531 | orchestrator | 2025-05-19 19:32:36.893433 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:32:36.893647 | orchestrator | Monday 19 May 2025 19:32:36 +0000 (0:00:01.415) 0:00:28.551 ************ 2025-05-19 19:32:37.118382 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:37.119049 | orchestrator | 2025-05-19 19:32:37.120043 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:32:37.120991 | orchestrator | Monday 19 May 2025 19:32:37 +0000 (0:00:00.234) 0:00:28.786 ************ 2025-05-19 19:32:37.346330 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:32:37.346440 | orchestrator | 2025-05-19 19:32:37.348901 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:37.348928 | orchestrator | Monday 19 May 2025 19:32:37 +0000 (0:00:00.227) 0:00:29.013 ************ 2025-05-19 19:32:37.859802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-19 19:32:37.860539 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-19 19:32:37.861922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-19 19:32:37.865196 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-19 19:32:37.865223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
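The "Write configuration file" handler delegates to testbed-manager and persists the data shown as _ceph_configure_lvm_config_data. A plausible shape for the per-node file it produces is sketched below, using the values printed above for testbed-node-4; the destination path on the manager is not visible in this log, so none is asserted here.

# Sketch of the persisted per-node data for testbed-node-4 (values verbatim
# from the configuration data printed above; file location is an unknown).
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e
  sdc:
    osd_lvm_uuid: 5fdf60fa-c839-55c0-9693-b393079e2a5b
lvm_volumes:
  - data: osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e
    data_vg: ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e
  - data: osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b
    data_vg: ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b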
testbed-node-5 => (item=loop4) 2025-05-19 19:32:37.865251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-19 19:32:37.865262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-19 19:32:37.865466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-19 19:32:37.866261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-19 19:32:37.866814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-19 19:32:37.867468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-19 19:32:37.868067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-19 19:32:37.868350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-19 19:32:37.868716 | orchestrator | 2025-05-19 19:32:37.869781 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:37.869865 | orchestrator | Monday 19 May 2025 19:32:37 +0000 (0:00:00.515) 0:00:29.529 ************ 2025-05-19 19:32:38.064937 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:38.065428 | orchestrator | 2025-05-19 19:32:38.066437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:38.066811 | orchestrator | Monday 19 May 2025 19:32:38 +0000 (0:00:00.204) 0:00:29.734 ************ 2025-05-19 19:32:38.264314 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:38.264830 | orchestrator | 2025-05-19 19:32:38.266575 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:38.267245 | orchestrator | Monday 19 May 2025 19:32:38 +0000 (0:00:00.198) 0:00:29.933 ************ 2025-05-19 19:32:38.464245 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:38.464749 | orchestrator | 2025-05-19 19:32:38.465398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:38.465849 | orchestrator | Monday 19 May 2025 19:32:38 +0000 (0:00:00.200) 0:00:30.133 ************ 2025-05-19 19:32:38.645393 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:38.645474 | orchestrator | 2025-05-19 19:32:38.647570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:38.648358 | orchestrator | Monday 19 May 2025 19:32:38 +0000 (0:00:00.180) 0:00:30.314 ************ 2025-05-19 19:32:38.853969 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:38.854207 | orchestrator | 2025-05-19 19:32:38.856271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:38.856305 | orchestrator | Monday 19 May 2025 19:32:38 +0000 (0:00:00.207) 0:00:30.521 ************ 2025-05-19 19:32:39.046610 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:39.047465 | orchestrator | 2025-05-19 19:32:39.048116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:39.049173 | orchestrator | Monday 19 May 2025 19:32:39 +0000 (0:00:00.194) 0:00:30.716 ************ 2025-05-19 19:32:39.260845 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:39.262813 
| orchestrator | 2025-05-19 19:32:39.264823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:39.265489 | orchestrator | Monday 19 May 2025 19:32:39 +0000 (0:00:00.212) 0:00:30.928 ************ 2025-05-19 19:32:39.482310 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:39.482470 | orchestrator | 2025-05-19 19:32:39.482488 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:39.483426 | orchestrator | Monday 19 May 2025 19:32:39 +0000 (0:00:00.218) 0:00:31.147 ************ 2025-05-19 19:32:40.093855 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4) 2025-05-19 19:32:40.093962 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4) 2025-05-19 19:32:40.094544 | orchestrator | 2025-05-19 19:32:40.095302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:40.095936 | orchestrator | Monday 19 May 2025 19:32:40 +0000 (0:00:00.615) 0:00:31.763 ************ 2025-05-19 19:32:40.882266 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091) 2025-05-19 19:32:40.882332 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091) 2025-05-19 19:32:40.882714 | orchestrator | 2025-05-19 19:32:40.883231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:40.886178 | orchestrator | Monday 19 May 2025 19:32:40 +0000 (0:00:00.787) 0:00:32.550 ************ 2025-05-19 19:32:41.304205 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20) 2025-05-19 19:32:41.304367 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20) 2025-05-19 19:32:41.305020 | orchestrator | 2025-05-19 19:32:41.305748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:41.308769 | orchestrator | Monday 19 May 2025 19:32:41 +0000 (0:00:00.423) 0:00:32.973 ************ 2025-05-19 19:32:41.697905 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be) 2025-05-19 19:32:41.698267 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be) 2025-05-19 19:32:41.698755 | orchestrator | 2025-05-19 19:32:41.699531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:32:41.700623 | orchestrator | Monday 19 May 2025 19:32:41 +0000 (0:00:00.393) 0:00:33.367 ************ 2025-05-19 19:32:42.022596 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:32:42.022900 | orchestrator | 2025-05-19 19:32:42.023873 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:42.024657 | orchestrator | Monday 19 May 2025 19:32:42 +0000 (0:00:00.324) 0:00:33.692 ************ 2025-05-19 19:32:42.438956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-19 19:32:42.439120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-19 19:32:42.439786 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-19 19:32:42.442574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-19 19:32:42.442598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-19 19:32:42.442610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-19 19:32:42.442936 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-19 19:32:42.444125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-19 19:32:42.444423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-19 19:32:42.444955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-19 19:32:42.445415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-19 19:32:42.446248 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-19 19:32:42.446437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-19 19:32:42.446808 | orchestrator | 2025-05-19 19:32:42.447425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:42.447870 | orchestrator | Monday 19 May 2025 19:32:42 +0000 (0:00:00.415) 0:00:34.107 ************ 2025-05-19 19:32:42.639894 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:42.640207 | orchestrator | 2025-05-19 19:32:42.641185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:42.642187 | orchestrator | Monday 19 May 2025 19:32:42 +0000 (0:00:00.202) 0:00:34.309 ************ 2025-05-19 19:32:42.840728 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:42.841526 | orchestrator | 2025-05-19 19:32:42.842086 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:42.842798 | orchestrator | Monday 19 May 2025 19:32:42 +0000 (0:00:00.200) 0:00:34.510 ************ 2025-05-19 19:32:43.053242 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:43.053464 | orchestrator | 2025-05-19 19:32:43.053632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:43.054509 | orchestrator | Monday 19 May 2025 19:32:43 +0000 (0:00:00.212) 0:00:34.723 ************ 2025-05-19 19:32:43.255751 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:43.255946 | orchestrator | 2025-05-19 19:32:43.256165 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:43.256546 | orchestrator | Monday 19 May 2025 19:32:43 +0000 (0:00:00.202) 0:00:34.925 ************ 2025-05-19 19:32:43.451092 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:43.451530 | orchestrator | 2025-05-19 19:32:43.451795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:43.452260 | orchestrator | Monday 19 May 2025 19:32:43 +0000 (0:00:00.194) 0:00:35.120 ************ 2025-05-19 19:32:44.051228 | orchestrator | skipping: [testbed-node-5] 2025-05-19 
19:32:44.051381 | orchestrator | 2025-05-19 19:32:44.052051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:44.052660 | orchestrator | Monday 19 May 2025 19:32:44 +0000 (0:00:00.600) 0:00:35.720 ************ 2025-05-19 19:32:44.280756 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:44.280856 | orchestrator | 2025-05-19 19:32:44.282491 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:44.282547 | orchestrator | Monday 19 May 2025 19:32:44 +0000 (0:00:00.229) 0:00:35.949 ************ 2025-05-19 19:32:44.495492 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:44.495760 | orchestrator | 2025-05-19 19:32:44.498270 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:44.498313 | orchestrator | Monday 19 May 2025 19:32:44 +0000 (0:00:00.213) 0:00:36.163 ************ 2025-05-19 19:32:45.121876 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-19 19:32:45.122282 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-19 19:32:45.123410 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-19 19:32:45.123901 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-19 19:32:45.125554 | orchestrator | 2025-05-19 19:32:45.125598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:45.125919 | orchestrator | Monday 19 May 2025 19:32:45 +0000 (0:00:00.628) 0:00:36.791 ************ 2025-05-19 19:32:45.335184 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:45.335363 | orchestrator | 2025-05-19 19:32:45.337017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:45.337074 | orchestrator | Monday 19 May 2025 19:32:45 +0000 (0:00:00.213) 0:00:37.005 ************ 2025-05-19 19:32:45.526203 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:45.526441 | orchestrator | 2025-05-19 19:32:45.527289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:45.527809 | orchestrator | Monday 19 May 2025 19:32:45 +0000 (0:00:00.189) 0:00:37.194 ************ 2025-05-19 19:32:45.729853 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:45.729946 | orchestrator | 2025-05-19 19:32:45.730151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:32:45.731050 | orchestrator | Monday 19 May 2025 19:32:45 +0000 (0:00:00.204) 0:00:37.399 ************ 2025-05-19 19:32:45.947703 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:45.949379 | orchestrator | 2025-05-19 19:32:45.949438 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-19 19:32:45.949448 | orchestrator | Monday 19 May 2025 19:32:45 +0000 (0:00:00.216) 0:00:37.615 ************ 2025-05-19 19:32:46.127910 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-19 19:32:46.128286 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-19 19:32:46.128893 | orchestrator | 2025-05-19 19:32:46.130060 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-19 19:32:46.130666 | orchestrator | Monday 19 May 2025 19:32:46 +0000 (0:00:00.181) 0:00:37.797 ************ 2025-05-19 19:32:46.264827 | 
orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:46.265525 | orchestrator | 2025-05-19 19:32:46.268015 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-19 19:32:46.270469 | orchestrator | Monday 19 May 2025 19:32:46 +0000 (0:00:00.136) 0:00:37.934 ************ 2025-05-19 19:32:46.401093 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:46.401240 | orchestrator | 2025-05-19 19:32:46.403682 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-19 19:32:46.404053 | orchestrator | Monday 19 May 2025 19:32:46 +0000 (0:00:00.134) 0:00:38.068 ************ 2025-05-19 19:32:46.709687 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:46.710287 | orchestrator | 2025-05-19 19:32:46.711860 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-19 19:32:46.712747 | orchestrator | Monday 19 May 2025 19:32:46 +0000 (0:00:00.311) 0:00:38.379 ************ 2025-05-19 19:32:46.854283 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:32:46.854805 | orchestrator | 2025-05-19 19:32:46.856420 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-19 19:32:46.857164 | orchestrator | Monday 19 May 2025 19:32:46 +0000 (0:00:00.144) 0:00:38.524 ************ 2025-05-19 19:32:47.043440 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}}) 2025-05-19 19:32:47.043623 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5646b4ad-081a-5fe7-ab17-c0ecc5756623'}}) 2025-05-19 19:32:47.044171 | orchestrator | 2025-05-19 19:32:47.044859 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-19 19:32:47.045955 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.189) 0:00:38.713 ************ 2025-05-19 19:32:47.203319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}})  2025-05-19 19:32:47.203903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5646b4ad-081a-5fe7-ab17-c0ecc5756623'}})  2025-05-19 19:32:47.204682 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:47.205353 | orchestrator | 2025-05-19 19:32:47.205848 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-19 19:32:47.206478 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.159) 0:00:38.873 ************ 2025-05-19 19:32:47.376370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}})  2025-05-19 19:32:47.376595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5646b4ad-081a-5fe7-ab17-c0ecc5756623'}})  2025-05-19 19:32:47.378311 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:47.378594 | orchestrator | 2025-05-19 19:32:47.379779 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-19 19:32:47.379802 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.172) 0:00:39.046 ************ 2025-05-19 19:32:47.536576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}})  2025-05-19 19:32:47.536823 
| orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5646b4ad-081a-5fe7-ab17-c0ecc5756623'}})  2025-05-19 19:32:47.537690 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:47.539917 | orchestrator | 2025-05-19 19:32:47.539944 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-19 19:32:47.539958 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.158) 0:00:39.204 ************ 2025-05-19 19:32:47.682572 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:32:47.683097 | orchestrator | 2025-05-19 19:32:47.684577 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-19 19:32:47.685264 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.147) 0:00:39.352 ************ 2025-05-19 19:32:47.826550 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:32:47.826828 | orchestrator | 2025-05-19 19:32:47.829751 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-19 19:32:47.829871 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.142) 0:00:39.495 ************ 2025-05-19 19:32:47.962787 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:47.962966 | orchestrator | 2025-05-19 19:32:47.963611 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-19 19:32:47.964450 | orchestrator | Monday 19 May 2025 19:32:47 +0000 (0:00:00.137) 0:00:39.632 ************ 2025-05-19 19:32:48.095482 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:48.095665 | orchestrator | 2025-05-19 19:32:48.096187 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-19 19:32:48.096838 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.132) 0:00:39.765 ************ 2025-05-19 19:32:48.233784 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:48.234491 | orchestrator | 2025-05-19 19:32:48.237567 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-19 19:32:48.237918 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.137) 0:00:39.902 ************ 2025-05-19 19:32:48.576564 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:32:48.576788 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:48.579200 | orchestrator |  "sdb": { 2025-05-19 19:32:48.580039 | orchestrator |  "osd_lvm_uuid": "f4656c6e-aa1c-5ab7-9900-7160e6354d4d" 2025-05-19 19:32:48.580711 | orchestrator |  }, 2025-05-19 19:32:48.581584 | orchestrator |  "sdc": { 2025-05-19 19:32:48.582761 | orchestrator |  "osd_lvm_uuid": "5646b4ad-081a-5fe7-ab17-c0ecc5756623" 2025-05-19 19:32:48.584984 | orchestrator |  } 2025-05-19 19:32:48.585107 | orchestrator |  } 2025-05-19 19:32:48.585880 | orchestrator | } 2025-05-19 19:32:48.586697 | orchestrator | 2025-05-19 19:32:48.587244 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-19 19:32:48.587489 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.342) 0:00:40.244 ************ 2025-05-19 19:32:48.714287 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:48.714702 | orchestrator | 2025-05-19 19:32:48.715447 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-19 19:32:48.716381 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.139) 0:00:40.384 ************ 2025-05-19 
19:32:48.851473 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:48.851648 | orchestrator | 2025-05-19 19:32:48.854106 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-19 19:32:48.854338 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.135) 0:00:40.519 ************ 2025-05-19 19:32:48.986987 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:32:48.987220 | orchestrator | 2025-05-19 19:32:48.988623 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-19 19:32:48.989468 | orchestrator | Monday 19 May 2025 19:32:48 +0000 (0:00:00.136) 0:00:40.656 ************ 2025-05-19 19:32:49.265647 | orchestrator | changed: [testbed-node-5] => { 2025-05-19 19:32:49.266298 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-19 19:32:49.267262 | orchestrator |  "ceph_osd_devices": { 2025-05-19 19:32:49.269631 | orchestrator |  "sdb": { 2025-05-19 19:32:49.269670 | orchestrator |  "osd_lvm_uuid": "f4656c6e-aa1c-5ab7-9900-7160e6354d4d" 2025-05-19 19:32:49.269684 | orchestrator |  }, 2025-05-19 19:32:49.270662 | orchestrator |  "sdc": { 2025-05-19 19:32:49.271442 | orchestrator |  "osd_lvm_uuid": "5646b4ad-081a-5fe7-ab17-c0ecc5756623" 2025-05-19 19:32:49.272103 | orchestrator |  } 2025-05-19 19:32:49.272595 | orchestrator |  }, 2025-05-19 19:32:49.273530 | orchestrator |  "lvm_volumes": [ 2025-05-19 19:32:49.273731 | orchestrator |  { 2025-05-19 19:32:49.274466 | orchestrator |  "data": "osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d", 2025-05-19 19:32:49.274847 | orchestrator |  "data_vg": "ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d" 2025-05-19 19:32:49.276558 | orchestrator |  }, 2025-05-19 19:32:49.276584 | orchestrator |  { 2025-05-19 19:32:49.276596 | orchestrator |  "data": "osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623", 2025-05-19 19:32:49.277114 | orchestrator |  "data_vg": "ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623" 2025-05-19 19:32:49.277311 | orchestrator |  } 2025-05-19 19:32:49.277987 | orchestrator |  ] 2025-05-19 19:32:49.278164 | orchestrator |  } 2025-05-19 19:32:49.278620 | orchestrator | } 2025-05-19 19:32:49.278838 | orchestrator | 2025-05-19 19:32:49.279385 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-19 19:32:49.279656 | orchestrator | Monday 19 May 2025 19:32:49 +0000 (0:00:00.278) 0:00:40.934 ************ 2025-05-19 19:32:50.344649 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-19 19:32:50.344757 | orchestrator | 2025-05-19 19:32:50.348684 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:32:50.348927 | orchestrator | 2025-05-19 19:32:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:32:50.349048 | orchestrator | 2025-05-19 19:32:50 | INFO  | Please wait and do not abort execution. 
2025-05-19 19:32:50.350564 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 19:32:50.351253 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 19:32:50.352053 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 19:32:50.352846 | orchestrator | 2025-05-19 19:32:50.353692 | orchestrator | 2025-05-19 19:32:50.354132 | orchestrator | 2025-05-19 19:32:50.354843 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:32:50.355367 | orchestrator | Monday 19 May 2025 19:32:50 +0000 (0:00:01.077) 0:00:42.012 ************ 2025-05-19 19:32:50.355865 | orchestrator | =============================================================================== 2025-05-19 19:32:50.356352 | orchestrator | Write configuration file ------------------------------------------------ 4.09s 2025-05-19 19:32:50.357067 | orchestrator | Add known partitions to the list of available block devices ------------- 1.70s 2025-05-19 19:32:50.357387 | orchestrator | Add known links to the list of available block devices ------------------ 1.63s 2025-05-19 19:32:50.358314 | orchestrator | Print configuration data ------------------------------------------------ 0.98s 2025-05-19 19:32:50.359424 | orchestrator | Add known links to the list of available block devices ------------------ 0.92s 2025-05-19 19:32:50.360410 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-05-19 19:32:50.360662 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2025-05-19 19:32:50.362132 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.78s 2025-05-19 19:32:50.363018 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s 2025-05-19 19:32:50.363107 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-05-19 19:32:50.363118 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.67s 2025-05-19 19:32:50.363840 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-19 19:32:50.364214 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-05-19 19:32:50.364589 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-05-19 19:32:50.365480 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2025-05-19 19:32:50.366103 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-05-19 19:32:50.366367 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-05-19 19:32:50.366403 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2025-05-19 19:32:50.367009 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.59s 2025-05-19 19:32:50.367172 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2025-05-19 19:33:02.383824 | orchestrator | 2025-05-19 19:33:02 | INFO  | Task fa8ca574-6490-4a34-a58b-279f3d2e1d18 is running in background. Output coming soon. 
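The configuration data printed above shows the derivation used by this play: for every device listed under ceph_osd_devices, one lvm_volumes entry is generated whose data LV is named osd-block-<osd_lvm_uuid> and whose data_vg is ceph-<osd_lvm_uuid>. A minimal sketch of the resulting host configuration, assuming plain YAML (the exact file name written by the "Write configuration file" handler and any additional keys are not shown in this log):

    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: f4656c6e-aa1c-5ab7-9900-7160e6354d4d
      sdc:
        osd_lvm_uuid: 5646b4ad-081a-5fe7-ab17-c0ecc5756623
    lvm_volumes:
      - data: osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d
        data_vg: ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d
      - data: osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623
        data_vg: ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623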
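In the ceph-create-lvm-devices play that follows, the "Create block VGs" and "Create block LVs" tasks turn each lvm_volumes entry into one volume group backed by the corresponding raw device and one logical volume spanning it; the final "Print LVM report data" task confirms the result (for example /dev/sdb -> ceph-6eb1ee5c-... -> osd-block-6eb1ee5c-...). A minimal, hypothetical sketch of equivalent tasks using the community.general LVM modules; the actual OSISM tasks may differ in module choice, device lookup, and options, and the device_of mapping below is an assumption for illustration only:

    - name: Create block VGs (sketch)
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "/dev/{{ device_of[item.data_vg] }}"  # e.g. /dev/sdb; hypothetical VG-to-device mapping
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs (sketch)
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE  # consume the whole VG for the OSD block LV
      loop: "{{ lvm_volumes }}"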
2025-05-19 19:33:36.864797 | orchestrator | 2025-05-19 19:33:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-05-19 19:33:36.864905 | orchestrator | 2025-05-19 19:33:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-05-19 19:33:36.864937 | orchestrator | 2025-05-19 19:33:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-05-19 19:33:36.864950 | orchestrator | 2025-05-19 19:33:29 | INFO  | File 20-netbox not found in /inventory.pre/ 2025-05-19 19:33:36.864963 | orchestrator | 2025-05-19 19:33:36 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups 2025-05-19 19:33:36.864974 | orchestrator | [master bf38698] 2025-05-19-19-33 2025-05-19 19:33:36.864987 | orchestrator | 1 file changed, 42 insertions(+) 2025-05-19 19:33:38.449104 | orchestrator | 2025-05-19 19:33:38 | INFO  | Task 604e0f29-8730-4838-99f8-3e543b49c5f2 (ceph-create-lvm-devices) was prepared for execution. 2025-05-19 19:33:38.449219 | orchestrator | 2025-05-19 19:33:38 | INFO  | It takes a moment until task 604e0f29-8730-4838-99f8-3e543b49c5f2 (ceph-create-lvm-devices) has been started and output is visible here. 2025-05-19 19:33:41.462931 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:33:41.972368 | orchestrator | 2025-05-19 19:33:41.973964 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-19 19:33:41.974347 | orchestrator | 2025-05-19 19:33:41.974995 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:33:41.975249 | orchestrator | Monday 19 May 2025 19:33:41 +0000 (0:00:00.443) 0:00:00.443 ************ 2025-05-19 19:33:42.212493 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 19:33:42.213016 | orchestrator | 2025-05-19 19:33:42.214280 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:33:42.214525 | orchestrator | Monday 19 May 2025 19:33:42 +0000 (0:00:00.243) 0:00:00.686 ************ 2025-05-19 19:33:42.433914 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:33:42.434294 | orchestrator | 2025-05-19 19:33:42.434934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:42.435160 | orchestrator | Monday 19 May 2025 19:33:42 +0000 (0:00:00.221) 0:00:00.908 ************ 2025-05-19 19:33:43.119980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-19 19:33:43.120112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-19 19:33:43.120127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-19 19:33:43.120431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-19 19:33:43.120477 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-19 19:33:43.121228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-19 19:33:43.123682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-19 19:33:43.123727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-19 19:33:43.127088 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-19 19:33:43.127928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-19 19:33:43.128214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-19 19:33:43.130180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-19 19:33:43.130541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-19 19:33:43.130852 | orchestrator | 2025-05-19 19:33:43.131178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:43.131500 | orchestrator | Monday 19 May 2025 19:33:43 +0000 (0:00:00.684) 0:00:01.593 ************ 2025-05-19 19:33:43.315868 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:43.316054 | orchestrator | 2025-05-19 19:33:43.316743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:43.319423 | orchestrator | Monday 19 May 2025 19:33:43 +0000 (0:00:00.196) 0:00:01.789 ************ 2025-05-19 19:33:43.526118 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:43.526521 | orchestrator | 2025-05-19 19:33:43.526848 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:43.527510 | orchestrator | Monday 19 May 2025 19:33:43 +0000 (0:00:00.210) 0:00:01.999 ************ 2025-05-19 19:33:43.718364 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:43.718565 | orchestrator | 2025-05-19 19:33:43.722570 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:43.722726 | orchestrator | Monday 19 May 2025 19:33:43 +0000 (0:00:00.192) 0:00:02.192 ************ 2025-05-19 19:33:43.901612 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:43.902415 | orchestrator | 2025-05-19 19:33:43.903156 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:43.903791 | orchestrator | Monday 19 May 2025 19:33:43 +0000 (0:00:00.183) 0:00:02.375 ************ 2025-05-19 19:33:44.115742 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:44.115951 | orchestrator | 2025-05-19 19:33:44.117857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:44.118363 | orchestrator | Monday 19 May 2025 19:33:44 +0000 (0:00:00.213) 0:00:02.589 ************ 2025-05-19 19:33:44.341398 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:44.341588 | orchestrator | 2025-05-19 19:33:44.341864 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:44.342413 | orchestrator | Monday 19 May 2025 19:33:44 +0000 (0:00:00.226) 0:00:02.815 ************ 2025-05-19 19:33:44.547331 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:44.547515 | orchestrator | 2025-05-19 19:33:44.549090 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:44.549289 | orchestrator | Monday 19 May 2025 19:33:44 +0000 (0:00:00.204) 0:00:03.020 ************ 2025-05-19 19:33:44.742005 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:44.743293 | orchestrator | 2025-05-19 19:33:44.744336 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-05-19 19:33:44.745071 | orchestrator | Monday 19 May 2025 19:33:44 +0000 (0:00:00.194) 0:00:03.215 ************ 2025-05-19 19:33:45.391689 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4) 2025-05-19 19:33:45.392465 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4) 2025-05-19 19:33:45.393475 | orchestrator | 2025-05-19 19:33:45.394268 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:45.395544 | orchestrator | Monday 19 May 2025 19:33:45 +0000 (0:00:00.649) 0:00:03.864 ************ 2025-05-19 19:33:46.214237 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199) 2025-05-19 19:33:46.214471 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199) 2025-05-19 19:33:46.216258 | orchestrator | 2025-05-19 19:33:46.216284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:46.216297 | orchestrator | Monday 19 May 2025 19:33:46 +0000 (0:00:00.822) 0:00:04.687 ************ 2025-05-19 19:33:46.651552 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2) 2025-05-19 19:33:46.651790 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2) 2025-05-19 19:33:46.651811 | orchestrator | 2025-05-19 19:33:46.652254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:46.652581 | orchestrator | Monday 19 May 2025 19:33:46 +0000 (0:00:00.439) 0:00:05.126 ************ 2025-05-19 19:33:47.127253 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53) 2025-05-19 19:33:47.127950 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53) 2025-05-19 19:33:47.128034 | orchestrator | 2025-05-19 19:33:47.129293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:33:47.129572 | orchestrator | Monday 19 May 2025 19:33:47 +0000 (0:00:00.472) 0:00:05.599 ************ 2025-05-19 19:33:47.442819 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:33:47.442948 | orchestrator | 2025-05-19 19:33:47.443053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:47.443173 | orchestrator | Monday 19 May 2025 19:33:47 +0000 (0:00:00.318) 0:00:05.917 ************ 2025-05-19 19:33:47.913393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-19 19:33:47.914568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-19 19:33:47.915591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-19 19:33:47.916380 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-19 19:33:47.918765 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-19 19:33:47.918783 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-19 19:33:47.918793 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-19 19:33:47.918803 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-19 19:33:47.919052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-19 19:33:47.919511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-19 19:33:47.919910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-19 19:33:47.920424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-19 19:33:47.920636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-19 19:33:47.921070 | orchestrator | 2025-05-19 19:33:47.921502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:47.921908 | orchestrator | Monday 19 May 2025 19:33:47 +0000 (0:00:00.468) 0:00:06.386 ************ 2025-05-19 19:33:48.103102 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:48.104719 | orchestrator | 2025-05-19 19:33:48.105572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:48.105647 | orchestrator | Monday 19 May 2025 19:33:48 +0000 (0:00:00.191) 0:00:06.577 ************ 2025-05-19 19:33:48.306208 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:48.307652 | orchestrator | 2025-05-19 19:33:48.310060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:48.311200 | orchestrator | Monday 19 May 2025 19:33:48 +0000 (0:00:00.202) 0:00:06.780 ************ 2025-05-19 19:33:48.502974 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:48.504038 | orchestrator | 2025-05-19 19:33:48.504842 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:48.505285 | orchestrator | Monday 19 May 2025 19:33:48 +0000 (0:00:00.196) 0:00:06.976 ************ 2025-05-19 19:33:48.707796 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:48.708817 | orchestrator | 2025-05-19 19:33:48.708970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:48.709997 | orchestrator | Monday 19 May 2025 19:33:48 +0000 (0:00:00.205) 0:00:07.182 ************ 2025-05-19 19:33:49.275239 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:49.275462 | orchestrator | 2025-05-19 19:33:49.276082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:49.278788 | orchestrator | Monday 19 May 2025 19:33:49 +0000 (0:00:00.565) 0:00:07.748 ************ 2025-05-19 19:33:49.488523 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:49.488620 | orchestrator | 2025-05-19 19:33:49.488691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:49.489295 | orchestrator | Monday 19 May 2025 19:33:49 +0000 (0:00:00.213) 0:00:07.961 ************ 2025-05-19 19:33:49.692981 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:49.693095 | orchestrator | 2025-05-19 19:33:49.693530 
| orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:49.694062 | orchestrator | Monday 19 May 2025 19:33:49 +0000 (0:00:00.204) 0:00:08.166 ************ 2025-05-19 19:33:49.887517 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:49.887671 | orchestrator | 2025-05-19 19:33:49.887975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:49.888811 | orchestrator | Monday 19 May 2025 19:33:49 +0000 (0:00:00.195) 0:00:08.362 ************ 2025-05-19 19:33:50.505959 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-19 19:33:50.506602 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-19 19:33:50.507200 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-19 19:33:50.507742 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-19 19:33:50.508512 | orchestrator | 2025-05-19 19:33:50.508802 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:50.509505 | orchestrator | Monday 19 May 2025 19:33:50 +0000 (0:00:00.617) 0:00:08.979 ************ 2025-05-19 19:33:50.704621 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:50.704930 | orchestrator | 2025-05-19 19:33:50.705393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:50.706174 | orchestrator | Monday 19 May 2025 19:33:50 +0000 (0:00:00.199) 0:00:09.178 ************ 2025-05-19 19:33:50.905292 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:50.905871 | orchestrator | 2025-05-19 19:33:50.906913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:50.909378 | orchestrator | Monday 19 May 2025 19:33:50 +0000 (0:00:00.201) 0:00:09.379 ************ 2025-05-19 19:33:51.097319 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:51.098063 | orchestrator | 2025-05-19 19:33:51.099260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:33:51.100178 | orchestrator | Monday 19 May 2025 19:33:51 +0000 (0:00:00.190) 0:00:09.570 ************ 2025-05-19 19:33:51.286291 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:51.286485 | orchestrator | 2025-05-19 19:33:51.286984 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 19:33:51.287475 | orchestrator | Monday 19 May 2025 19:33:51 +0000 (0:00:00.189) 0:00:09.760 ************ 2025-05-19 19:33:51.424053 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:51.425507 | orchestrator | 2025-05-19 19:33:51.425710 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 19:33:51.426955 | orchestrator | Monday 19 May 2025 19:33:51 +0000 (0:00:00.136) 0:00:09.896 ************ 2025-05-19 19:33:51.634279 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6eb1ee5c-85e6-559d-849b-4772bddae6d6'}}) 2025-05-19 19:33:51.634390 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '702b6aa6-b3de-5669-bdb1-4e94528c6268'}}) 2025-05-19 19:33:51.634405 | orchestrator | 2025-05-19 19:33:51.634627 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 19:33:51.635316 | orchestrator | Monday 19 May 2025 19:33:51 +0000 (0:00:00.210) 0:00:10.106 
************ 2025-05-19 19:33:53.824048 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'}) 2025-05-19 19:33:53.824547 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'}) 2025-05-19 19:33:53.825437 | orchestrator | 2025-05-19 19:33:53.826263 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 19:33:53.827950 | orchestrator | Monday 19 May 2025 19:33:53 +0000 (0:00:02.188) 0:00:12.295 ************ 2025-05-19 19:33:53.986114 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:53.987063 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:53.988130 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:53.988991 | orchestrator | 2025-05-19 19:33:53.989953 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 19:33:53.992219 | orchestrator | Monday 19 May 2025 19:33:53 +0000 (0:00:00.165) 0:00:12.460 ************ 2025-05-19 19:33:55.448117 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'}) 2025-05-19 19:33:55.448670 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'}) 2025-05-19 19:33:55.449546 | orchestrator | 2025-05-19 19:33:55.449814 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 19:33:55.450417 | orchestrator | Monday 19 May 2025 19:33:55 +0000 (0:00:01.459) 0:00:13.919 ************ 2025-05-19 19:33:55.603959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:55.604171 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:55.604604 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:55.605991 | orchestrator | 2025-05-19 19:33:55.606391 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 19:33:55.607241 | orchestrator | Monday 19 May 2025 19:33:55 +0000 (0:00:00.158) 0:00:14.078 ************ 2025-05-19 19:33:55.736490 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:55.736669 | orchestrator | 2025-05-19 19:33:55.737129 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 19:33:55.738572 | orchestrator | Monday 19 May 2025 19:33:55 +0000 (0:00:00.131) 0:00:14.210 ************ 2025-05-19 19:33:55.900083 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:55.900467 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:55.901250 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:55.902118 | orchestrator | 2025-05-19 19:33:55.903342 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 19:33:55.903530 | orchestrator | Monday 19 May 2025 19:33:55 +0000 (0:00:00.162) 0:00:14.373 ************ 2025-05-19 19:33:56.035910 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:56.036417 | orchestrator | 2025-05-19 19:33:56.037827 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 19:33:56.039124 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.137) 0:00:14.510 ************ 2025-05-19 19:33:56.211590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:56.212266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:56.213032 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:56.214257 | orchestrator | 2025-05-19 19:33:56.214871 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 19:33:56.215800 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.175) 0:00:14.685 ************ 2025-05-19 19:33:56.504375 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:56.505336 | orchestrator | 2025-05-19 19:33:56.505823 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 19:33:56.506878 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.293) 0:00:14.978 ************ 2025-05-19 19:33:56.672022 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:56.672791 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:56.673170 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:56.673733 | orchestrator | 2025-05-19 19:33:56.673937 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 19:33:56.674489 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.168) 0:00:15.147 ************ 2025-05-19 19:33:56.813774 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:33:56.814493 | orchestrator | 2025-05-19 19:33:56.814571 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 19:33:56.814973 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.141) 0:00:15.288 ************ 2025-05-19 19:33:56.977240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:56.978675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:56.979714 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:56.980535 | orchestrator | 2025-05-19 
19:33:56.981610 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 19:33:56.982303 | orchestrator | Monday 19 May 2025 19:33:56 +0000 (0:00:00.159) 0:00:15.448 ************ 2025-05-19 19:33:57.142681 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:57.142879 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:57.143236 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:57.144223 | orchestrator | 2025-05-19 19:33:57.144717 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 19:33:57.146894 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.168) 0:00:15.617 ************ 2025-05-19 19:33:57.305615 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:33:57.305837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:33:57.306880 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:57.307473 | orchestrator | 2025-05-19 19:33:57.309572 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 19:33:57.309614 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.162) 0:00:15.779 ************ 2025-05-19 19:33:57.461670 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:57.461865 | orchestrator | 2025-05-19 19:33:57.462818 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 19:33:57.463722 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.154) 0:00:15.934 ************ 2025-05-19 19:33:57.601618 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:57.602500 | orchestrator | 2025-05-19 19:33:57.603562 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 19:33:57.604857 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.140) 0:00:16.075 ************ 2025-05-19 19:33:57.735837 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:33:57.736246 | orchestrator | 2025-05-19 19:33:57.737464 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 19:33:57.738300 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.134) 0:00:16.210 ************ 2025-05-19 19:33:57.881721 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:33:57.881956 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 19:33:57.883067 | orchestrator | } 2025-05-19 19:33:57.884532 | orchestrator | 2025-05-19 19:33:57.885443 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 19:33:57.886238 | orchestrator | Monday 19 May 2025 19:33:57 +0000 (0:00:00.144) 0:00:16.354 ************ 2025-05-19 19:33:58.033599 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:33:58.034633 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 19:33:58.036157 | orchestrator | } 2025-05-19 19:33:58.036825 | orchestrator | 2025-05-19 19:33:58.038263 | 
orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 19:33:58.038762 | orchestrator | Monday 19 May 2025 19:33:58 +0000 (0:00:00.152) 0:00:16.507 ************ 2025-05-19 19:33:58.175099 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:33:58.175603 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 19:33:58.176789 | orchestrator | } 2025-05-19 19:33:58.177674 | orchestrator | 2025-05-19 19:33:58.178338 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 19:33:58.179450 | orchestrator | Monday 19 May 2025 19:33:58 +0000 (0:00:00.142) 0:00:16.649 ************ 2025-05-19 19:33:59.060682 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:33:59.061513 | orchestrator | 2025-05-19 19:33:59.062845 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 19:33:59.063595 | orchestrator | Monday 19 May 2025 19:33:59 +0000 (0:00:00.884) 0:00:17.533 ************ 2025-05-19 19:33:59.561641 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:33:59.562441 | orchestrator | 2025-05-19 19:33:59.563193 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 19:33:59.563803 | orchestrator | Monday 19 May 2025 19:33:59 +0000 (0:00:00.501) 0:00:18.035 ************ 2025-05-19 19:34:00.069272 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:34:00.069388 | orchestrator | 2025-05-19 19:34:00.070898 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 19:34:00.071003 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.506) 0:00:18.542 ************ 2025-05-19 19:34:00.214572 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:34:00.215507 | orchestrator | 2025-05-19 19:34:00.216016 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-19 19:34:00.216978 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.147) 0:00:18.689 ************ 2025-05-19 19:34:00.345098 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:00.345721 | orchestrator | 2025-05-19 19:34:00.346757 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 19:34:00.347159 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.129) 0:00:18.819 ************ 2025-05-19 19:34:00.465103 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:00.465592 | orchestrator | 2025-05-19 19:34:00.466899 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 19:34:00.467594 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.120) 0:00:18.939 ************ 2025-05-19 19:34:00.606991 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:34:00.607165 | orchestrator |  "vgs_report": { 2025-05-19 19:34:00.608309 | orchestrator |  "vg": [] 2025-05-19 19:34:00.608938 | orchestrator |  } 2025-05-19 19:34:00.609827 | orchestrator | } 2025-05-19 19:34:00.610638 | orchestrator | 2025-05-19 19:34:00.611101 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 19:34:00.612545 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.142) 0:00:19.081 ************ 2025-05-19 19:34:00.749886 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:00.753803 | orchestrator | 2025-05-19 19:34:00.753838 | orchestrator | TASK [Calculate 
size needed for LVs on ceph_db_devices] ************************ 2025-05-19 19:34:00.753853 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.142) 0:00:19.224 ************ 2025-05-19 19:34:00.895219 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:00.895842 | orchestrator | 2025-05-19 19:34:00.897109 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 19:34:00.899290 | orchestrator | Monday 19 May 2025 19:34:00 +0000 (0:00:00.145) 0:00:19.369 ************ 2025-05-19 19:34:01.023452 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.023556 | orchestrator | 2025-05-19 19:34:01.023972 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 19:34:01.024683 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.125) 0:00:19.495 ************ 2025-05-19 19:34:01.155435 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.155857 | orchestrator | 2025-05-19 19:34:01.156615 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 19:34:01.161495 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.133) 0:00:19.629 ************ 2025-05-19 19:34:01.455707 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.455806 | orchestrator | 2025-05-19 19:34:01.458794 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 19:34:01.459313 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.300) 0:00:19.929 ************ 2025-05-19 19:34:01.598640 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.599422 | orchestrator | 2025-05-19 19:34:01.600437 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 19:34:01.601700 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.143) 0:00:20.073 ************ 2025-05-19 19:34:01.742725 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.743642 | orchestrator | 2025-05-19 19:34:01.743942 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 19:34:01.745039 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.143) 0:00:20.216 ************ 2025-05-19 19:34:01.895514 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:01.896423 | orchestrator | 2025-05-19 19:34:01.897694 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 19:34:01.898255 | orchestrator | Monday 19 May 2025 19:34:01 +0000 (0:00:00.153) 0:00:20.370 ************ 2025-05-19 19:34:02.031068 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.033918 | orchestrator | 2025-05-19 19:34:02.033959 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 19:34:02.033974 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.133) 0:00:20.503 ************ 2025-05-19 19:34:02.168662 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.169748 | orchestrator | 2025-05-19 19:34:02.171281 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 19:34:02.171710 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.138) 0:00:20.642 ************ 2025-05-19 19:34:02.304776 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.306427 | orchestrator | 2025-05-19 19:34:02.307527 | orchestrator | 
TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 19:34:02.308358 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.137) 0:00:20.779 ************ 2025-05-19 19:34:02.445724 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.446566 | orchestrator | 2025-05-19 19:34:02.448287 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 19:34:02.448782 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.140) 0:00:20.919 ************ 2025-05-19 19:34:02.594241 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.594442 | orchestrator | 2025-05-19 19:34:02.595114 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 19:34:02.595385 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.148) 0:00:21.068 ************ 2025-05-19 19:34:02.725381 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.726320 | orchestrator | 2025-05-19 19:34:02.726457 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 19:34:02.726914 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.131) 0:00:21.199 ************ 2025-05-19 19:34:02.900922 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:02.901432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:02.902615 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:02.903496 | orchestrator | 2025-05-19 19:34:02.905684 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 19:34:02.905945 | orchestrator | Monday 19 May 2025 19:34:02 +0000 (0:00:00.175) 0:00:21.375 ************ 2025-05-19 19:34:03.067284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:03.067493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:03.068247 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:03.070469 | orchestrator | 2025-05-19 19:34:03.070805 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 19:34:03.071723 | orchestrator | Monday 19 May 2025 19:34:03 +0000 (0:00:00.165) 0:00:21.540 ************ 2025-05-19 19:34:03.428875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:03.429071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:03.430195 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:03.430943 | orchestrator | 2025-05-19 19:34:03.432990 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 19:34:03.433014 | orchestrator | Monday 19 May 2025 19:34:03 +0000 (0:00:00.362) 0:00:21.902 ************ 2025-05-19 19:34:03.597623 | 
orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:03.598323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:03.599108 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:03.600084 | orchestrator | 2025-05-19 19:34:03.601238 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 19:34:03.603221 | orchestrator | Monday 19 May 2025 19:34:03 +0000 (0:00:00.168) 0:00:22.070 ************ 2025-05-19 19:34:03.767952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:03.768051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:03.768114 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:03.768160 | orchestrator | 2025-05-19 19:34:03.768278 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 19:34:03.769495 | orchestrator | Monday 19 May 2025 19:34:03 +0000 (0:00:00.169) 0:00:22.240 ************ 2025-05-19 19:34:03.923196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:03.924202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:03.924920 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:03.925901 | orchestrator | 2025-05-19 19:34:03.926754 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 19:34:03.927419 | orchestrator | Monday 19 May 2025 19:34:03 +0000 (0:00:00.156) 0:00:22.396 ************ 2025-05-19 19:34:04.107079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:04.107321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:04.109890 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:04.110788 | orchestrator | 2025-05-19 19:34:04.111917 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 19:34:04.112398 | orchestrator | Monday 19 May 2025 19:34:04 +0000 (0:00:00.182) 0:00:22.579 ************ 2025-05-19 19:34:04.282675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:04.282784 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:04.285030 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:04.285057 | orchestrator | 2025-05-19 19:34:04.285577 | orchestrator | TASK [Get list of Ceph LVs 
with associated VGs] ******************************** 2025-05-19 19:34:04.287770 | orchestrator | Monday 19 May 2025 19:34:04 +0000 (0:00:00.177) 0:00:22.757 ************ 2025-05-19 19:34:04.799917 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:34:04.800094 | orchestrator | 2025-05-19 19:34:04.800764 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 19:34:04.801236 | orchestrator | Monday 19 May 2025 19:34:04 +0000 (0:00:00.517) 0:00:23.274 ************ 2025-05-19 19:34:05.325273 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:34:05.327126 | orchestrator | 2025-05-19 19:34:05.327183 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 19:34:05.327839 | orchestrator | Monday 19 May 2025 19:34:05 +0000 (0:00:00.522) 0:00:23.797 ************ 2025-05-19 19:34:05.474995 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:34:05.476710 | orchestrator | 2025-05-19 19:34:05.477418 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 19:34:05.478523 | orchestrator | Monday 19 May 2025 19:34:05 +0000 (0:00:00.152) 0:00:23.950 ************ 2025-05-19 19:34:05.661990 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'vg_name': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'}) 2025-05-19 19:34:05.662190 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'vg_name': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'}) 2025-05-19 19:34:05.662206 | orchestrator | 2025-05-19 19:34:05.662533 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 19:34:05.663203 | orchestrator | Monday 19 May 2025 19:34:05 +0000 (0:00:00.185) 0:00:24.135 ************ 2025-05-19 19:34:06.092129 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:06.092726 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:06.093343 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:06.094778 | orchestrator | 2025-05-19 19:34:06.095362 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 19:34:06.096409 | orchestrator | Monday 19 May 2025 19:34:06 +0000 (0:00:00.429) 0:00:24.565 ************ 2025-05-19 19:34:06.267585 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:06.267663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:06.268576 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:06.269299 | orchestrator | 2025-05-19 19:34:06.270069 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 19:34:06.270847 | orchestrator | Monday 19 May 2025 19:34:06 +0000 (0:00:00.176) 0:00:24.741 ************ 2025-05-19 19:34:06.439198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 
'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'})  2025-05-19 19:34:06.440090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'})  2025-05-19 19:34:06.441683 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:34:06.443446 | orchestrator | 2025-05-19 19:34:06.443477 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 19:34:06.443931 | orchestrator | Monday 19 May 2025 19:34:06 +0000 (0:00:00.171) 0:00:24.913 ************ 2025-05-19 19:34:07.127471 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:34:07.127714 | orchestrator |  "lvm_report": { 2025-05-19 19:34:07.128393 | orchestrator |  "lv": [ 2025-05-19 19:34:07.128925 | orchestrator |  { 2025-05-19 19:34:07.129521 | orchestrator |  "lv_name": "osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6", 2025-05-19 19:34:07.130180 | orchestrator |  "vg_name": "ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6" 2025-05-19 19:34:07.130626 | orchestrator |  }, 2025-05-19 19:34:07.131045 | orchestrator |  { 2025-05-19 19:34:07.131513 | orchestrator |  "lv_name": "osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268", 2025-05-19 19:34:07.132068 | orchestrator |  "vg_name": "ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268" 2025-05-19 19:34:07.132780 | orchestrator |  } 2025-05-19 19:34:07.133331 | orchestrator |  ], 2025-05-19 19:34:07.133844 | orchestrator |  "pv": [ 2025-05-19 19:34:07.134479 | orchestrator |  { 2025-05-19 19:34:07.134855 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 19:34:07.135609 | orchestrator |  "vg_name": "ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6" 2025-05-19 19:34:07.136158 | orchestrator |  }, 2025-05-19 19:34:07.136841 | orchestrator |  { 2025-05-19 19:34:07.137161 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 19:34:07.137546 | orchestrator |  "vg_name": "ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268" 2025-05-19 19:34:07.138114 | orchestrator |  } 2025-05-19 19:34:07.138482 | orchestrator |  ] 2025-05-19 19:34:07.138935 | orchestrator |  } 2025-05-19 19:34:07.139443 | orchestrator | } 2025-05-19 19:34:07.140295 | orchestrator | 2025-05-19 19:34:07.140419 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-19 19:34:07.140870 | orchestrator | 2025-05-19 19:34:07.141647 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:34:07.142152 | orchestrator | Monday 19 May 2025 19:34:07 +0000 (0:00:00.687) 0:00:25.601 ************ 2025-05-19 19:34:07.702210 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-19 19:34:07.702699 | orchestrator | 2025-05-19 19:34:07.703079 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:34:07.703897 | orchestrator | Monday 19 May 2025 19:34:07 +0000 (0:00:00.575) 0:00:26.177 ************ 2025-05-19 19:34:07.943591 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:07.943814 | orchestrator | 2025-05-19 19:34:07.944656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:07.945759 | orchestrator | Monday 19 May 2025 19:34:07 +0000 (0:00:00.239) 0:00:26.416 ************ 2025-05-19 19:34:08.383803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-19 19:34:08.384919 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-19 19:34:08.386591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-19 19:34:08.387487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-19 19:34:08.387554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-19 19:34:08.388420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-19 19:34:08.388705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-19 19:34:08.389590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-19 19:34:08.390302 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-19 19:34:08.390513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-19 19:34:08.390730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-19 19:34:08.391256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-19 19:34:08.391660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-19 19:34:08.392006 | orchestrator | 2025-05-19 19:34:08.392313 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:08.392568 | orchestrator | Monday 19 May 2025 19:34:08 +0000 (0:00:00.440) 0:00:26.856 ************ 2025-05-19 19:34:08.574786 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:08.575098 | orchestrator | 2025-05-19 19:34:08.575997 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:08.576693 | orchestrator | Monday 19 May 2025 19:34:08 +0000 (0:00:00.191) 0:00:27.048 ************ 2025-05-19 19:34:08.780564 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:08.781662 | orchestrator | 2025-05-19 19:34:08.782119 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:08.782613 | orchestrator | Monday 19 May 2025 19:34:08 +0000 (0:00:00.206) 0:00:27.255 ************ 2025-05-19 19:34:08.971631 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:08.971735 | orchestrator | 2025-05-19 19:34:08.971834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:08.972368 | orchestrator | Monday 19 May 2025 19:34:08 +0000 (0:00:00.190) 0:00:27.445 ************ 2025-05-19 19:34:09.168669 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:09.174420 | orchestrator | 2025-05-19 19:34:09.174738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:09.175856 | orchestrator | Monday 19 May 2025 19:34:09 +0000 (0:00:00.195) 0:00:27.641 ************ 2025-05-19 19:34:09.375101 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:09.375325 | orchestrator | 2025-05-19 19:34:09.375517 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:09.376262 | orchestrator | Monday 19 May 2025 19:34:09 +0000 (0:00:00.206) 0:00:27.848 ************ 2025-05-19 19:34:09.566861 | 
orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:09.566972 | orchestrator | 2025-05-19 19:34:09.567628 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:09.568448 | orchestrator | Monday 19 May 2025 19:34:09 +0000 (0:00:00.192) 0:00:28.041 ************ 2025-05-19 19:34:09.761780 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:09.761911 | orchestrator | 2025-05-19 19:34:09.761931 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:09.762072 | orchestrator | Monday 19 May 2025 19:34:09 +0000 (0:00:00.195) 0:00:28.236 ************ 2025-05-19 19:34:10.151857 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:10.152512 | orchestrator | 2025-05-19 19:34:10.154718 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:10.155254 | orchestrator | Monday 19 May 2025 19:34:10 +0000 (0:00:00.388) 0:00:28.624 ************ 2025-05-19 19:34:10.561512 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677) 2025-05-19 19:34:10.561832 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677) 2025-05-19 19:34:10.562304 | orchestrator | 2025-05-19 19:34:10.562867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:10.563552 | orchestrator | Monday 19 May 2025 19:34:10 +0000 (0:00:00.411) 0:00:29.036 ************ 2025-05-19 19:34:11.002346 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3) 2025-05-19 19:34:11.003100 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3) 2025-05-19 19:34:11.003976 | orchestrator | 2025-05-19 19:34:11.005741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:11.006302 | orchestrator | Monday 19 May 2025 19:34:10 +0000 (0:00:00.439) 0:00:29.475 ************ 2025-05-19 19:34:11.456910 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3) 2025-05-19 19:34:11.457349 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3) 2025-05-19 19:34:11.457460 | orchestrator | 2025-05-19 19:34:11.458385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:11.458756 | orchestrator | Monday 19 May 2025 19:34:11 +0000 (0:00:00.450) 0:00:29.926 ************ 2025-05-19 19:34:11.884113 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d) 2025-05-19 19:34:11.884548 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d) 2025-05-19 19:34:11.885737 | orchestrator | 2025-05-19 19:34:11.887275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:11.887661 | orchestrator | Monday 19 May 2025 19:34:11 +0000 (0:00:00.430) 0:00:30.357 ************ 2025-05-19 19:34:12.209880 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:34:12.210376 | orchestrator | 2025-05-19 19:34:12.210425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
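The preceding "Add known links to the list of available block devices" tasks (and the "Add known partitions ..." tasks that follow) iterate over every block device (loop0..loop7, sda..sdd, sr0) and, where a /dev/disk/by-id symlink such as scsi-0QEMU_QEMU_HARDDISK_* or ata-QEMU_DVD-ROM_* resolves to that device, record the alias alongside the device name. The actual task file (/ansible/tasks/_add-device-links.yml) is not part of this log, so the following is only a minimal sketch of the pattern in plain Ansible; the variable names (_device_links, _link_targets, _available_devices) are placeholders, not the ones OSISM uses.

# Sketch only: collect by-id symlinks and add those pointing at /dev/sdb as aliases.
- name: Collect /dev/disk/by-id symlinks (sketch)
  ansible.builtin.find:
    paths: /dev/disk/by-id
    file_type: link
  register: _device_links

- name: Resolve where each by-id link points (sketch)
  ansible.builtin.stat:
    path: "{{ item.path }}"
  loop: "{{ _device_links.files }}"
  register: _link_targets

- name: Add links that point at /dev/sdb to the device list (sketch)
  ansible.builtin.set_fact:
    # basename turns /dev/disk/by-id/scsi-0QEMU_... into the bare alias seen in the log items
    _available_devices: "{{ _available_devices | default(['sdb']) + [item.stat.path | basename] }}"
  loop: "{{ _link_targets.results }}"
  when: item.stat.lnk_source == '/dev/sdb'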
2025-05-19 19:34:12.210641 | orchestrator | Monday 19 May 2025 19:34:12 +0000 (0:00:00.326) 0:00:30.684 ************ 2025-05-19 19:34:12.689832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-19 19:34:12.690261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-19 19:34:12.690657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-19 19:34:12.691956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-19 19:34:12.692529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-19 19:34:12.693741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-19 19:34:12.694322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-19 19:34:12.695067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-19 19:34:12.695623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-19 19:34:12.696243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-19 19:34:12.696337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-19 19:34:12.697282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-19 19:34:12.697547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-19 19:34:12.697975 | orchestrator | 2025-05-19 19:34:12.698477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:12.699520 | orchestrator | Monday 19 May 2025 19:34:12 +0000 (0:00:00.480) 0:00:31.164 ************ 2025-05-19 19:34:12.893703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:12.894258 | orchestrator | 2025-05-19 19:34:12.895550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:12.897945 | orchestrator | Monday 19 May 2025 19:34:12 +0000 (0:00:00.203) 0:00:31.368 ************ 2025-05-19 19:34:13.090855 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:13.091801 | orchestrator | 2025-05-19 19:34:13.095350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:13.098161 | orchestrator | Monday 19 May 2025 19:34:13 +0000 (0:00:00.194) 0:00:31.562 ************ 2025-05-19 19:34:13.697279 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:13.697384 | orchestrator | 2025-05-19 19:34:13.697652 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:13.699559 | orchestrator | Monday 19 May 2025 19:34:13 +0000 (0:00:00.607) 0:00:32.170 ************ 2025-05-19 19:34:13.897061 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:13.897320 | orchestrator | 2025-05-19 19:34:13.898352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:13.901330 | orchestrator | Monday 19 May 2025 19:34:13 +0000 (0:00:00.201) 0:00:32.371 ************ 2025-05-19 19:34:14.099386 | 
orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:14.099523 | orchestrator | 2025-05-19 19:34:14.099828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:14.100099 | orchestrator | Monday 19 May 2025 19:34:14 +0000 (0:00:00.202) 0:00:32.574 ************ 2025-05-19 19:34:14.298613 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:14.298720 | orchestrator | 2025-05-19 19:34:14.299079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:14.299761 | orchestrator | Monday 19 May 2025 19:34:14 +0000 (0:00:00.196) 0:00:32.771 ************ 2025-05-19 19:34:14.508847 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:14.508957 | orchestrator | 2025-05-19 19:34:14.508974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:14.509062 | orchestrator | Monday 19 May 2025 19:34:14 +0000 (0:00:00.210) 0:00:32.981 ************ 2025-05-19 19:34:14.714843 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:14.716743 | orchestrator | 2025-05-19 19:34:14.718121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:14.718310 | orchestrator | Monday 19 May 2025 19:34:14 +0000 (0:00:00.205) 0:00:33.187 ************ 2025-05-19 19:34:15.360520 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-19 19:34:15.360609 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-19 19:34:15.363656 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-19 19:34:15.363753 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-19 19:34:15.364951 | orchestrator | 2025-05-19 19:34:15.366188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:15.366952 | orchestrator | Monday 19 May 2025 19:34:15 +0000 (0:00:00.645) 0:00:33.833 ************ 2025-05-19 19:34:15.556307 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:15.556579 | orchestrator | 2025-05-19 19:34:15.557307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:15.557965 | orchestrator | Monday 19 May 2025 19:34:15 +0000 (0:00:00.197) 0:00:34.031 ************ 2025-05-19 19:34:15.761635 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:15.762540 | orchestrator | 2025-05-19 19:34:15.763531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:15.764527 | orchestrator | Monday 19 May 2025 19:34:15 +0000 (0:00:00.205) 0:00:34.236 ************ 2025-05-19 19:34:15.965248 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:15.965750 | orchestrator | 2025-05-19 19:34:15.966484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:15.967015 | orchestrator | Monday 19 May 2025 19:34:15 +0000 (0:00:00.202) 0:00:34.439 ************ 2025-05-19 19:34:16.626782 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:16.626951 | orchestrator | 2025-05-19 19:34:16.627652 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 19:34:16.628381 | orchestrator | Monday 19 May 2025 19:34:16 +0000 (0:00:00.660) 0:00:35.099 ************ 2025-05-19 19:34:16.760998 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:16.761814 | orchestrator | 
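The LVM report printed for testbed-node-3 above shows the target layout: one volume group named ceph-<osd_lvm_uuid> per OSD data disk (/dev/sdb, /dev/sdc), each holding a single osd-block-<osd_lvm_uuid> logical volume. The "Create block VGs" and "Create block LVs" tasks that follow produce the same layout on testbed-node-4. Below is a minimal sketch of that pattern using the standard community.general LVM modules; the UUID and device name are placeholders and this is an illustration, not the actual OSISM task.

# Sketch only: one VG per OSD data disk, one block LV filling the VG.
# The zeroed UUID stands in for the osd_lvm_uuid values seen in the log.
- name: Create block VG on the OSD data disk (sketch)
  community.general.lvg:
    vg: ceph-00000000-0000-0000-0000-000000000000
    pvs: /dev/sdb
    state: present

- name: Create the osd-block LV inside that VG (sketch)
  community.general.lvol:
    vg: ceph-00000000-0000-0000-0000-000000000000
    lv: osd-block-00000000-0000-0000-0000-000000000000
    size: 100%FREE
    shrink: false
    state: present

The resulting VG/LV names are the same data/data_vg pairs that appear as lvm_volumes items in the later "Fail if block LV defined in lvm_volumes is missing" checks, which compare the created names against that list.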
2025-05-19 19:34:16.762445 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 19:34:16.762988 | orchestrator | Monday 19 May 2025 19:34:16 +0000 (0:00:00.136) 0:00:35.236 ************ 2025-05-19 19:34:16.995474 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}}) 2025-05-19 19:34:16.996551 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5fdf60fa-c839-55c0-9693-b393079e2a5b'}}) 2025-05-19 19:34:16.996900 | orchestrator | 2025-05-19 19:34:16.997337 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 19:34:16.999331 | orchestrator | Monday 19 May 2025 19:34:16 +0000 (0:00:00.233) 0:00:35.470 ************ 2025-05-19 19:34:18.796394 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}) 2025-05-19 19:34:18.796619 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'}) 2025-05-19 19:34:18.799980 | orchestrator | 2025-05-19 19:34:18.800766 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 19:34:18.801598 | orchestrator | Monday 19 May 2025 19:34:18 +0000 (0:00:01.797) 0:00:37.267 ************ 2025-05-19 19:34:18.956074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:18.956209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:18.956811 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:18.957385 | orchestrator | 2025-05-19 19:34:18.958099 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 19:34:18.958625 | orchestrator | Monday 19 May 2025 19:34:18 +0000 (0:00:00.162) 0:00:37.430 ************ 2025-05-19 19:34:20.303902 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}) 2025-05-19 19:34:20.304040 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'}) 2025-05-19 19:34:20.304805 | orchestrator | 2025-05-19 19:34:20.306578 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 19:34:20.306730 | orchestrator | Monday 19 May 2025 19:34:20 +0000 (0:00:01.346) 0:00:38.776 ************ 2025-05-19 19:34:20.470125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:20.471078 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:20.472224 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:20.472815 | orchestrator | 2025-05-19 19:34:20.474200 | orchestrator | TASK [Create DB VGs] 
*********************************************************** 2025-05-19 19:34:20.474465 | orchestrator | Monday 19 May 2025 19:34:20 +0000 (0:00:00.168) 0:00:38.944 ************ 2025-05-19 19:34:20.612197 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:20.615936 | orchestrator | 2025-05-19 19:34:20.618172 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 19:34:20.619744 | orchestrator | Monday 19 May 2025 19:34:20 +0000 (0:00:00.142) 0:00:39.086 ************ 2025-05-19 19:34:20.774083 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:20.774565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:20.775527 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:20.776412 | orchestrator | 2025-05-19 19:34:20.777254 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 19:34:20.778168 | orchestrator | Monday 19 May 2025 19:34:20 +0000 (0:00:00.161) 0:00:39.248 ************ 2025-05-19 19:34:21.090406 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:21.090898 | orchestrator | 2025-05-19 19:34:21.091320 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 19:34:21.092601 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.315) 0:00:39.563 ************ 2025-05-19 19:34:21.259365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:21.259820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:21.261329 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:21.262372 | orchestrator | 2025-05-19 19:34:21.263386 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 19:34:21.264452 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.170) 0:00:39.733 ************ 2025-05-19 19:34:21.396910 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:21.397425 | orchestrator | 2025-05-19 19:34:21.399054 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 19:34:21.399892 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.137) 0:00:39.871 ************ 2025-05-19 19:34:21.560471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:21.561225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:21.563068 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:21.563505 | orchestrator | 2025-05-19 19:34:21.564557 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 19:34:21.564638 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.161) 0:00:40.032 ************ 2025-05-19 19:34:21.690587 | orchestrator | ok: 
[testbed-node-4] 2025-05-19 19:34:21.690798 | orchestrator | 2025-05-19 19:34:21.691314 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 19:34:21.692548 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.132) 0:00:40.164 ************ 2025-05-19 19:34:21.855462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:21.855655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:21.856210 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:21.856973 | orchestrator | 2025-05-19 19:34:21.858497 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 19:34:21.858537 | orchestrator | Monday 19 May 2025 19:34:21 +0000 (0:00:00.163) 0:00:40.328 ************ 2025-05-19 19:34:22.024412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:22.025847 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:22.026238 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:22.028799 | orchestrator | 2025-05-19 19:34:22.028837 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 19:34:22.028852 | orchestrator | Monday 19 May 2025 19:34:22 +0000 (0:00:00.170) 0:00:40.498 ************ 2025-05-19 19:34:22.182449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:22.183075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:22.183221 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:22.183822 | orchestrator | 2025-05-19 19:34:22.184506 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 19:34:22.185059 | orchestrator | Monday 19 May 2025 19:34:22 +0000 (0:00:00.158) 0:00:40.657 ************ 2025-05-19 19:34:22.322424 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:22.322625 | orchestrator | 2025-05-19 19:34:22.323287 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 19:34:22.325508 | orchestrator | Monday 19 May 2025 19:34:22 +0000 (0:00:00.138) 0:00:40.795 ************ 2025-05-19 19:34:22.468857 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:22.469034 | orchestrator | 2025-05-19 19:34:22.469644 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 19:34:22.469861 | orchestrator | Monday 19 May 2025 19:34:22 +0000 (0:00:00.147) 0:00:40.943 ************ 2025-05-19 19:34:22.610346 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:22.610451 | orchestrator | 2025-05-19 19:34:22.610945 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 19:34:22.611435 | orchestrator | 
Monday 19 May 2025 19:34:22 +0000 (0:00:00.140) 0:00:41.084 ************ 2025-05-19 19:34:22.759127 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:34:22.759274 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 19:34:22.759310 | orchestrator | } 2025-05-19 19:34:22.759323 | orchestrator | 2025-05-19 19:34:22.759419 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 19:34:22.760197 | orchestrator | Monday 19 May 2025 19:34:22 +0000 (0:00:00.146) 0:00:41.231 ************ 2025-05-19 19:34:23.099903 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:34:23.100079 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 19:34:23.101354 | orchestrator | } 2025-05-19 19:34:23.103853 | orchestrator | 2025-05-19 19:34:23.103888 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 19:34:23.103903 | orchestrator | Monday 19 May 2025 19:34:23 +0000 (0:00:00.341) 0:00:41.573 ************ 2025-05-19 19:34:23.247086 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:34:23.247852 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 19:34:23.248831 | orchestrator | } 2025-05-19 19:34:23.249647 | orchestrator | 2025-05-19 19:34:23.250489 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 19:34:23.250918 | orchestrator | Monday 19 May 2025 19:34:23 +0000 (0:00:00.148) 0:00:41.721 ************ 2025-05-19 19:34:23.766355 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:23.766770 | orchestrator | 2025-05-19 19:34:23.767604 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 19:34:23.768416 | orchestrator | Monday 19 May 2025 19:34:23 +0000 (0:00:00.518) 0:00:42.240 ************ 2025-05-19 19:34:24.281088 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:24.281317 | orchestrator | 2025-05-19 19:34:24.281468 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 19:34:24.281497 | orchestrator | Monday 19 May 2025 19:34:24 +0000 (0:00:00.513) 0:00:42.753 ************ 2025-05-19 19:34:24.795915 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:24.796401 | orchestrator | 2025-05-19 19:34:24.797318 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 19:34:24.797751 | orchestrator | Monday 19 May 2025 19:34:24 +0000 (0:00:00.516) 0:00:43.270 ************ 2025-05-19 19:34:24.942607 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:24.942737 | orchestrator | 2025-05-19 19:34:24.943238 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-19 19:34:24.943942 | orchestrator | Monday 19 May 2025 19:34:24 +0000 (0:00:00.145) 0:00:43.415 ************ 2025-05-19 19:34:25.049733 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:25.050294 | orchestrator | 2025-05-19 19:34:25.050997 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 19:34:25.051782 | orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.108) 0:00:43.524 ************ 2025-05-19 19:34:25.173637 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:25.173814 | orchestrator | 2025-05-19 19:34:25.176012 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 19:34:25.176070 | 
orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.120) 0:00:43.644 ************ 2025-05-19 19:34:25.324537 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:34:25.325399 | orchestrator |  "vgs_report": { 2025-05-19 19:34:25.325753 | orchestrator |  "vg": [] 2025-05-19 19:34:25.326619 | orchestrator |  } 2025-05-19 19:34:25.327182 | orchestrator | } 2025-05-19 19:34:25.329449 | orchestrator | 2025-05-19 19:34:25.330302 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 19:34:25.330501 | orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.154) 0:00:43.799 ************ 2025-05-19 19:34:25.464447 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:25.464634 | orchestrator | 2025-05-19 19:34:25.465365 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-19 19:34:25.466535 | orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.139) 0:00:43.938 ************ 2025-05-19 19:34:25.775768 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:25.776314 | orchestrator | 2025-05-19 19:34:25.777356 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 19:34:25.777585 | orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.311) 0:00:44.250 ************ 2025-05-19 19:34:25.919259 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:25.919890 | orchestrator | 2025-05-19 19:34:25.920875 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 19:34:25.923467 | orchestrator | Monday 19 May 2025 19:34:25 +0000 (0:00:00.143) 0:00:44.393 ************ 2025-05-19 19:34:26.038597 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.039168 | orchestrator | 2025-05-19 19:34:26.039633 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 19:34:26.040532 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.118) 0:00:44.512 ************ 2025-05-19 19:34:26.177777 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.178560 | orchestrator | 2025-05-19 19:34:26.179530 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 19:34:26.180347 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.139) 0:00:44.651 ************ 2025-05-19 19:34:26.314223 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.314435 | orchestrator | 2025-05-19 19:34:26.314776 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 19:34:26.315658 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.136) 0:00:44.788 ************ 2025-05-19 19:34:26.448509 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.448614 | orchestrator | 2025-05-19 19:34:26.452824 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 19:34:26.452882 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.134) 0:00:44.922 ************ 2025-05-19 19:34:26.595481 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.595715 | orchestrator | 2025-05-19 19:34:26.596382 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 19:34:26.597358 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.146) 0:00:45.069 ************ 2025-05-19 19:34:26.741053 | orchestrator | 
skipping: [testbed-node-4] 2025-05-19 19:34:26.741411 | orchestrator | 2025-05-19 19:34:26.742188 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 19:34:26.742992 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.145) 0:00:45.215 ************ 2025-05-19 19:34:26.886391 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:26.886745 | orchestrator | 2025-05-19 19:34:26.887826 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 19:34:26.888559 | orchestrator | Monday 19 May 2025 19:34:26 +0000 (0:00:00.145) 0:00:45.361 ************ 2025-05-19 19:34:27.017602 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.017921 | orchestrator | 2025-05-19 19:34:27.020355 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 19:34:27.020410 | orchestrator | Monday 19 May 2025 19:34:27 +0000 (0:00:00.129) 0:00:45.490 ************ 2025-05-19 19:34:27.145652 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.146008 | orchestrator | 2025-05-19 19:34:27.146915 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 19:34:27.148752 | orchestrator | Monday 19 May 2025 19:34:27 +0000 (0:00:00.129) 0:00:45.620 ************ 2025-05-19 19:34:27.284596 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.284991 | orchestrator | 2025-05-19 19:34:27.285751 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 19:34:27.286437 | orchestrator | Monday 19 May 2025 19:34:27 +0000 (0:00:00.138) 0:00:45.759 ************ 2025-05-19 19:34:27.618725 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.619454 | orchestrator | 2025-05-19 19:34:27.620358 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 19:34:27.621577 | orchestrator | Monday 19 May 2025 19:34:27 +0000 (0:00:00.334) 0:00:46.093 ************ 2025-05-19 19:34:27.791810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:27.791931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:27.792883 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.793524 | orchestrator | 2025-05-19 19:34:27.793600 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 19:34:27.794406 | orchestrator | Monday 19 May 2025 19:34:27 +0000 (0:00:00.173) 0:00:46.266 ************ 2025-05-19 19:34:27.949707 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:27.950437 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:27.952342 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:27.953540 | orchestrator | 2025-05-19 19:34:27.954315 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 19:34:27.954917 | orchestrator | Monday 19 May 2025 19:34:27 
+0000 (0:00:00.156) 0:00:46.423 ************ 2025-05-19 19:34:28.131463 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.131587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.132323 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.132758 | orchestrator | 2025-05-19 19:34:28.135249 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 19:34:28.135775 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.182) 0:00:46.605 ************ 2025-05-19 19:34:28.304955 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.305092 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.305224 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.305778 | orchestrator | 2025-05-19 19:34:28.306111 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 19:34:28.306730 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.173) 0:00:46.779 ************ 2025-05-19 19:34:28.481108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.481520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.481854 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.484687 | orchestrator | 2025-05-19 19:34:28.484714 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 19:34:28.485286 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.175) 0:00:46.954 ************ 2025-05-19 19:34:28.662908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.663017 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.663029 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.663196 | orchestrator | 2025-05-19 19:34:28.664269 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 19:34:28.664441 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.181) 0:00:47.135 ************ 2025-05-19 19:34:28.828059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.828384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.829005 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.829901 | 
orchestrator | 2025-05-19 19:34:28.830612 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 19:34:28.831300 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.166) 0:00:47.301 ************ 2025-05-19 19:34:28.990911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:28.991533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:28.992546 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:28.993687 | orchestrator | 2025-05-19 19:34:28.996711 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 19:34:28.997089 | orchestrator | Monday 19 May 2025 19:34:28 +0000 (0:00:00.163) 0:00:47.465 ************ 2025-05-19 19:34:29.504773 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:29.504942 | orchestrator | 2025-05-19 19:34:29.504960 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 19:34:29.505041 | orchestrator | Monday 19 May 2025 19:34:29 +0000 (0:00:00.512) 0:00:47.978 ************ 2025-05-19 19:34:30.035928 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:30.036900 | orchestrator | 2025-05-19 19:34:30.039271 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 19:34:30.039298 | orchestrator | Monday 19 May 2025 19:34:30 +0000 (0:00:00.530) 0:00:48.509 ************ 2025-05-19 19:34:30.378846 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:34:30.379018 | orchestrator | 2025-05-19 19:34:30.379282 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 19:34:30.379859 | orchestrator | Monday 19 May 2025 19:34:30 +0000 (0:00:00.344) 0:00:48.853 ************ 2025-05-19 19:34:30.561898 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'vg_name': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}) 2025-05-19 19:34:30.563424 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'vg_name': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'}) 2025-05-19 19:34:30.563455 | orchestrator | 2025-05-19 19:34:30.563468 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 19:34:30.563846 | orchestrator | Monday 19 May 2025 19:34:30 +0000 (0:00:00.183) 0:00:49.036 ************ 2025-05-19 19:34:30.730788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:30.730972 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:30.731226 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:30.731634 | orchestrator | 2025-05-19 19:34:30.731932 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 19:34:30.732579 | orchestrator | Monday 19 May 2025 19:34:30 +0000 (0:00:00.168) 0:00:49.205 ************ 2025-05-19 19:34:30.891838 | orchestrator | skipping: [testbed-node-4] 
=> (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:30.892041 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:30.892594 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:30.893252 | orchestrator | 2025-05-19 19:34:30.893879 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 19:34:30.894580 | orchestrator | Monday 19 May 2025 19:34:30 +0000 (0:00:00.160) 0:00:49.366 ************ 2025-05-19 19:34:31.052953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'})  2025-05-19 19:34:31.053215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'})  2025-05-19 19:34:31.053599 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:34:31.053902 | orchestrator | 2025-05-19 19:34:31.054524 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 19:34:31.055005 | orchestrator | Monday 19 May 2025 19:34:31 +0000 (0:00:00.160) 0:00:49.527 ************ 2025-05-19 19:34:31.875472 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:34:31.875802 | orchestrator |  "lvm_report": { 2025-05-19 19:34:31.876039 | orchestrator |  "lv": [ 2025-05-19 19:34:31.876925 | orchestrator |  { 2025-05-19 19:34:31.877303 | orchestrator |  "lv_name": "osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e", 2025-05-19 19:34:31.879840 | orchestrator |  "vg_name": "ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e" 2025-05-19 19:34:31.881112 | orchestrator |  }, 2025-05-19 19:34:31.882843 | orchestrator |  { 2025-05-19 19:34:31.884341 | orchestrator |  "lv_name": "osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b", 2025-05-19 19:34:31.884450 | orchestrator |  "vg_name": "ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b" 2025-05-19 19:34:31.885405 | orchestrator |  } 2025-05-19 19:34:31.885631 | orchestrator |  ], 2025-05-19 19:34:31.886083 | orchestrator |  "pv": [ 2025-05-19 19:34:31.886824 | orchestrator |  { 2025-05-19 19:34:31.887491 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 19:34:31.888052 | orchestrator |  "vg_name": "ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e" 2025-05-19 19:34:31.888487 | orchestrator |  }, 2025-05-19 19:34:31.888831 | orchestrator |  { 2025-05-19 19:34:31.889678 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 19:34:31.889719 | orchestrator |  "vg_name": "ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b" 2025-05-19 19:34:31.890084 | orchestrator |  } 2025-05-19 19:34:31.891205 | orchestrator |  ] 2025-05-19 19:34:31.891462 | orchestrator |  } 2025-05-19 19:34:31.891934 | orchestrator | } 2025-05-19 19:34:31.892378 | orchestrator | 2025-05-19 19:34:31.893049 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-19 19:34:31.893349 | orchestrator | 2025-05-19 19:34:31.893713 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 19:34:31.894244 | orchestrator | Monday 19 May 2025 19:34:31 +0000 (0:00:00.822) 0:00:50.350 ************ 2025-05-19 19:34:32.116307 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-19 
19:34:32.116908 | orchestrator | 2025-05-19 19:34:32.117666 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 19:34:32.118682 | orchestrator | Monday 19 May 2025 19:34:32 +0000 (0:00:00.239) 0:00:50.589 ************ 2025-05-19 19:34:32.343421 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:32.344165 | orchestrator | 2025-05-19 19:34:32.344906 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:32.345937 | orchestrator | Monday 19 May 2025 19:34:32 +0000 (0:00:00.228) 0:00:50.817 ************ 2025-05-19 19:34:32.807624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-19 19:34:32.809395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-19 19:34:32.810304 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-19 19:34:32.811687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-19 19:34:32.812845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-19 19:34:32.813826 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-19 19:34:32.814831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-19 19:34:32.815671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-19 19:34:32.816276 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-19 19:34:32.816838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-19 19:34:32.817634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-19 19:34:32.817941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-19 19:34:32.818417 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-19 19:34:32.818960 | orchestrator | 2025-05-19 19:34:32.819251 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:32.819660 | orchestrator | Monday 19 May 2025 19:34:32 +0000 (0:00:00.464) 0:00:51.282 ************ 2025-05-19 19:34:32.999630 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:32.999812 | orchestrator | 2025-05-19 19:34:33.000754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:33.001167 | orchestrator | Monday 19 May 2025 19:34:32 +0000 (0:00:00.191) 0:00:51.473 ************ 2025-05-19 19:34:33.187446 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:33.187639 | orchestrator | 2025-05-19 19:34:33.188271 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:33.188760 | orchestrator | Monday 19 May 2025 19:34:33 +0000 (0:00:00.186) 0:00:51.660 ************ 2025-05-19 19:34:33.393319 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:33.393515 | orchestrator | 2025-05-19 19:34:33.394253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:33.394931 | orchestrator | Monday 19 May 
2025 19:34:33 +0000 (0:00:00.207) 0:00:51.868 ************ 2025-05-19 19:34:33.586001 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:33.586331 | orchestrator | 2025-05-19 19:34:33.586524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:33.587233 | orchestrator | Monday 19 May 2025 19:34:33 +0000 (0:00:00.191) 0:00:52.060 ************ 2025-05-19 19:34:33.776032 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:33.776156 | orchestrator | 2025-05-19 19:34:33.777418 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:33.778681 | orchestrator | Monday 19 May 2025 19:34:33 +0000 (0:00:00.190) 0:00:52.250 ************ 2025-05-19 19:34:34.166310 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:34.166415 | orchestrator | 2025-05-19 19:34:34.167229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:34.169348 | orchestrator | Monday 19 May 2025 19:34:34 +0000 (0:00:00.388) 0:00:52.639 ************ 2025-05-19 19:34:34.369672 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:34.369861 | orchestrator | 2025-05-19 19:34:34.371185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:34.372388 | orchestrator | Monday 19 May 2025 19:34:34 +0000 (0:00:00.204) 0:00:52.843 ************ 2025-05-19 19:34:34.560673 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:34.561496 | orchestrator | 2025-05-19 19:34:34.561532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:34.561547 | orchestrator | Monday 19 May 2025 19:34:34 +0000 (0:00:00.190) 0:00:53.034 ************ 2025-05-19 19:34:34.970727 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4) 2025-05-19 19:34:34.971248 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4) 2025-05-19 19:34:34.971329 | orchestrator | 2025-05-19 19:34:34.971879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:34.972238 | orchestrator | Monday 19 May 2025 19:34:34 +0000 (0:00:00.410) 0:00:53.445 ************ 2025-05-19 19:34:35.425412 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091) 2025-05-19 19:34:35.425702 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091) 2025-05-19 19:34:35.425836 | orchestrator | 2025-05-19 19:34:35.426567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:35.427082 | orchestrator | Monday 19 May 2025 19:34:35 +0000 (0:00:00.450) 0:00:53.895 ************ 2025-05-19 19:34:35.840725 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20) 2025-05-19 19:34:35.841227 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20) 2025-05-19 19:34:35.841924 | orchestrator | 2025-05-19 19:34:35.842548 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:35.843693 | orchestrator | Monday 19 May 2025 19:34:35 +0000 (0:00:00.418) 0:00:54.314 ************ 2025-05-19 19:34:36.275935 | 
orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be) 2025-05-19 19:34:36.276174 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be) 2025-05-19 19:34:36.279347 | orchestrator | 2025-05-19 19:34:36.279437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 19:34:36.279453 | orchestrator | Monday 19 May 2025 19:34:36 +0000 (0:00:00.434) 0:00:54.749 ************ 2025-05-19 19:34:36.608764 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 19:34:36.608927 | orchestrator | 2025-05-19 19:34:36.609168 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:36.609677 | orchestrator | Monday 19 May 2025 19:34:36 +0000 (0:00:00.333) 0:00:55.082 ************ 2025-05-19 19:34:37.054434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-19 19:34:37.054655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-19 19:34:37.055475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-19 19:34:37.058104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-19 19:34:37.058849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-19 19:34:37.059695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-19 19:34:37.060423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-19 19:34:37.061191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-19 19:34:37.061695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-19 19:34:37.062088 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-19 19:34:37.062460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-19 19:34:37.062913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-19 19:34:37.063348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-19 19:34:37.063697 | orchestrator | 2025-05-19 19:34:37.064252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:37.064962 | orchestrator | Monday 19 May 2025 19:34:37 +0000 (0:00:00.445) 0:00:55.528 ************ 2025-05-19 19:34:37.672620 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:37.672849 | orchestrator | 2025-05-19 19:34:37.673534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:37.675842 | orchestrator | Monday 19 May 2025 19:34:37 +0000 (0:00:00.615) 0:00:56.144 ************ 2025-05-19 19:34:37.862986 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:37.863091 | orchestrator | 2025-05-19 19:34:37.863107 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:37.863408 | orchestrator | Monday 19 May 
2025 19:34:37 +0000 (0:00:00.191) 0:00:56.336 ************ 2025-05-19 19:34:38.066713 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:38.066922 | orchestrator | 2025-05-19 19:34:38.067531 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:38.068345 | orchestrator | Monday 19 May 2025 19:34:38 +0000 (0:00:00.204) 0:00:56.541 ************ 2025-05-19 19:34:38.269193 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:38.269301 | orchestrator | 2025-05-19 19:34:38.271675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:38.271710 | orchestrator | Monday 19 May 2025 19:34:38 +0000 (0:00:00.200) 0:00:56.741 ************ 2025-05-19 19:34:38.471121 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:38.473419 | orchestrator | 2025-05-19 19:34:38.474253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:38.475405 | orchestrator | Monday 19 May 2025 19:34:38 +0000 (0:00:00.202) 0:00:56.944 ************ 2025-05-19 19:34:38.668611 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:38.669278 | orchestrator | 2025-05-19 19:34:38.669807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:38.670771 | orchestrator | Monday 19 May 2025 19:34:38 +0000 (0:00:00.198) 0:00:57.143 ************ 2025-05-19 19:34:38.868559 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:38.869647 | orchestrator | 2025-05-19 19:34:38.870060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:38.870899 | orchestrator | Monday 19 May 2025 19:34:38 +0000 (0:00:00.199) 0:00:57.343 ************ 2025-05-19 19:34:39.091697 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:39.092296 | orchestrator | 2025-05-19 19:34:39.093290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:39.093516 | orchestrator | Monday 19 May 2025 19:34:39 +0000 (0:00:00.223) 0:00:57.566 ************ 2025-05-19 19:34:39.954507 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-19 19:34:39.955333 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-19 19:34:39.955422 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-19 19:34:39.955653 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-19 19:34:39.956393 | orchestrator | 2025-05-19 19:34:39.956738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:39.957393 | orchestrator | Monday 19 May 2025 19:34:39 +0000 (0:00:00.860) 0:00:58.427 ************ 2025-05-19 19:34:40.164722 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:40.165559 | orchestrator | 2025-05-19 19:34:40.166384 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:40.167215 | orchestrator | Monday 19 May 2025 19:34:40 +0000 (0:00:00.211) 0:00:58.639 ************ 2025-05-19 19:34:40.790109 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:40.791324 | orchestrator | 2025-05-19 19:34:40.793124 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:40.793169 | orchestrator | Monday 19 May 2025 19:34:40 +0000 (0:00:00.623) 0:00:59.262 ************ 2025-05-19 19:34:40.978118 | 
orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:40.978838 | orchestrator | 2025-05-19 19:34:40.979823 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 19:34:40.981002 | orchestrator | Monday 19 May 2025 19:34:40 +0000 (0:00:00.189) 0:00:59.452 ************ 2025-05-19 19:34:41.167723 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:41.168170 | orchestrator | 2025-05-19 19:34:41.168765 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 19:34:41.171651 | orchestrator | Monday 19 May 2025 19:34:41 +0000 (0:00:00.189) 0:00:59.641 ************ 2025-05-19 19:34:41.320796 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:41.320952 | orchestrator | 2025-05-19 19:34:41.322848 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 19:34:41.323461 | orchestrator | Monday 19 May 2025 19:34:41 +0000 (0:00:00.150) 0:00:59.792 ************ 2025-05-19 19:34:41.521539 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}}) 2025-05-19 19:34:41.521733 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5646b4ad-081a-5fe7-ab17-c0ecc5756623'}}) 2025-05-19 19:34:41.524302 | orchestrator | 2025-05-19 19:34:41.524520 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 19:34:41.526955 | orchestrator | Monday 19 May 2025 19:34:41 +0000 (0:00:00.203) 0:00:59.996 ************ 2025-05-19 19:34:43.315863 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}) 2025-05-19 19:34:43.315969 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'}) 2025-05-19 19:34:43.316738 | orchestrator | 2025-05-19 19:34:43.318881 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 19:34:43.319781 | orchestrator | Monday 19 May 2025 19:34:43 +0000 (0:00:01.792) 0:01:01.788 ************ 2025-05-19 19:34:43.476831 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:43.477756 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:43.477783 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:43.478805 | orchestrator | 2025-05-19 19:34:43.479262 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-19 19:34:43.481385 | orchestrator | Monday 19 May 2025 19:34:43 +0000 (0:00:00.162) 0:01:01.951 ************ 2025-05-19 19:34:44.746639 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}) 2025-05-19 19:34:44.746750 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'}) 2025-05-19 19:34:44.747348 | orchestrator | 2025-05-19 19:34:44.749578 | orchestrator | TASK 
[Print 'Create block LVs'] ************************************************ 2025-05-19 19:34:44.749694 | orchestrator | Monday 19 May 2025 19:34:44 +0000 (0:00:01.267) 0:01:03.218 ************ 2025-05-19 19:34:44.922319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:44.922529 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:44.923777 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:44.924706 | orchestrator | 2025-05-19 19:34:44.925956 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 19:34:44.926941 | orchestrator | Monday 19 May 2025 19:34:44 +0000 (0:00:00.177) 0:01:03.396 ************ 2025-05-19 19:34:45.258243 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:45.258424 | orchestrator | 2025-05-19 19:34:45.258590 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 19:34:45.259091 | orchestrator | Monday 19 May 2025 19:34:45 +0000 (0:00:00.333) 0:01:03.730 ************ 2025-05-19 19:34:45.444866 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:45.444968 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:45.445274 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:45.446475 | orchestrator | 2025-05-19 19:34:45.446868 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 19:34:45.448063 | orchestrator | Monday 19 May 2025 19:34:45 +0000 (0:00:00.186) 0:01:03.916 ************ 2025-05-19 19:34:45.596536 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:45.598766 | orchestrator | 2025-05-19 19:34:45.600042 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 19:34:45.601438 | orchestrator | Monday 19 May 2025 19:34:45 +0000 (0:00:00.154) 0:01:04.071 ************ 2025-05-19 19:34:45.766799 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:45.766888 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:45.766898 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:45.766907 | orchestrator | 2025-05-19 19:34:45.767090 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 19:34:45.767699 | orchestrator | Monday 19 May 2025 19:34:45 +0000 (0:00:00.170) 0:01:04.241 ************ 2025-05-19 19:34:45.914988 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:45.915623 | orchestrator | 2025-05-19 19:34:45.916301 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 19:34:45.916950 | orchestrator | Monday 19 May 2025 19:34:45 +0000 (0:00:00.147) 0:01:04.388 ************ 2025-05-19 19:34:46.083567 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:46.084162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:46.085643 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:46.087725 | orchestrator | 2025-05-19 19:34:46.087762 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 19:34:46.087777 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.168) 0:01:04.557 ************ 2025-05-19 19:34:46.238807 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:46.238990 | orchestrator | 2025-05-19 19:34:46.239496 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 19:34:46.241282 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.154) 0:01:04.712 ************ 2025-05-19 19:34:46.425111 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:46.425873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:46.426605 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:46.427403 | orchestrator | 2025-05-19 19:34:46.429306 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 19:34:46.429330 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.186) 0:01:04.898 ************ 2025-05-19 19:34:46.583386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:46.583884 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:46.584390 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:46.585214 | orchestrator | 2025-05-19 19:34:46.585320 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 19:34:46.586165 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.158) 0:01:05.057 ************ 2025-05-19 19:34:46.760424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:46.760599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:46.760687 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:46.761214 | orchestrator | 2025-05-19 19:34:46.762402 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 19:34:46.762691 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.178) 0:01:05.235 ************ 2025-05-19 19:34:46.894581 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:46.894675 | orchestrator | 2025-05-19 19:34:46.895844 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] 
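[Editor's note] The "Create block VGs" and "Create block LVs" tasks above create one volume group and one logical volume per entry derived from ceph_osd_devices, named ceph-<uuid> and osd-block-<uuid>. The following is a minimal, hypothetical sketch of equivalent Ansible tasks using community.general.lvg and community.general.lvol; the loop variable name and the "device" key are assumptions, not taken from the OSISM playbook itself.

# Hypothetical equivalent of the "Create block VGs" / "Create block LVs" tasks above.
# _ceph_osd_block_devices and item.device are assumed names for illustration only.
- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"             # e.g. ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d
    pvs: "/dev/{{ item.device }}"        # e.g. /dev/sdb
    state: present
  loop: "{{ _ceph_osd_block_devices }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                # e.g. osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d
    size: 100%VG                         # consume the whole VG for the block LV
    state: present
  loop: "{{ _ceph_osd_block_devices }}"
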
******************** 2025-05-19 19:34:46.896203 | orchestrator | Monday 19 May 2025 19:34:46 +0000 (0:00:00.133) 0:01:05.368 ************ 2025-05-19 19:34:47.263217 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:47.263294 | orchestrator | 2025-05-19 19:34:47.263637 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 19:34:47.266388 | orchestrator | Monday 19 May 2025 19:34:47 +0000 (0:00:00.366) 0:01:05.735 ************ 2025-05-19 19:34:47.391558 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:47.391985 | orchestrator | 2025-05-19 19:34:47.393280 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 19:34:47.393753 | orchestrator | Monday 19 May 2025 19:34:47 +0000 (0:00:00.130) 0:01:05.865 ************ 2025-05-19 19:34:47.541779 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:34:47.545286 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 19:34:47.545331 | orchestrator | } 2025-05-19 19:34:47.545518 | orchestrator | 2025-05-19 19:34:47.546557 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 19:34:47.547324 | orchestrator | Monday 19 May 2025 19:34:47 +0000 (0:00:00.148) 0:01:06.014 ************ 2025-05-19 19:34:47.678642 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:34:47.678978 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 19:34:47.679626 | orchestrator | } 2025-05-19 19:34:47.680333 | orchestrator | 2025-05-19 19:34:47.682586 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 19:34:47.682614 | orchestrator | Monday 19 May 2025 19:34:47 +0000 (0:00:00.138) 0:01:06.152 ************ 2025-05-19 19:34:47.820960 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:34:47.821107 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 19:34:47.821877 | orchestrator | } 2025-05-19 19:34:47.821895 | orchestrator | 2025-05-19 19:34:47.822211 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 19:34:47.822645 | orchestrator | Monday 19 May 2025 19:34:47 +0000 (0:00:00.142) 0:01:06.295 ************ 2025-05-19 19:34:48.327787 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:48.327895 | orchestrator | 2025-05-19 19:34:48.328193 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 19:34:48.328935 | orchestrator | Monday 19 May 2025 19:34:48 +0000 (0:00:00.506) 0:01:06.802 ************ 2025-05-19 19:34:48.836189 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:48.836297 | orchestrator | 2025-05-19 19:34:48.836706 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 19:34:48.839829 | orchestrator | Monday 19 May 2025 19:34:48 +0000 (0:00:00.506) 0:01:07.309 ************ 2025-05-19 19:34:49.330965 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:49.331066 | orchestrator | 2025-05-19 19:34:49.331080 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 19:34:49.331182 | orchestrator | Monday 19 May 2025 19:34:49 +0000 (0:00:00.494) 0:01:07.803 ************ 2025-05-19 19:34:49.484820 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:49.485290 | orchestrator | 2025-05-19 19:34:49.486405 | orchestrator | TASK [Calculate VG sizes (without buffer)] 
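[Editor's note] The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks below, together with the later "Combine JSON from _db/wal/db_wal_vgs_cmd_output" step and the printed vgs_report structure, indicate that the playbook shells out to vgs with JSON reporting. A simplified, hypothetical sketch (single VG class only; exact task wording and field list may differ):

# Hypothetical sketch of one "Gather ... VGs" step; register name follows the
# _db_vgs_cmd_output pattern mentioned in the log, field selection is assumed.
- name: Gather DB VGs with total and available size in bytes
  ansible.builtin.command: >
    vgs --reportformat json --units b --nosuffix -o vg_name,vg_size,vg_free
  register: _db_vgs_cmd_output
  changed_when: false

- name: Combine JSON from _db_vgs_cmd_output
  ansible.builtin.set_fact:
    vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report.0 }}"   # yields {"vg": [...]}
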
************************************* 2025-05-19 19:34:49.486539 | orchestrator | Monday 19 May 2025 19:34:49 +0000 (0:00:00.154) 0:01:07.958 ************ 2025-05-19 19:34:49.601015 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:49.601747 | orchestrator | 2025-05-19 19:34:49.603389 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 19:34:49.605284 | orchestrator | Monday 19 May 2025 19:34:49 +0000 (0:00:00.116) 0:01:08.075 ************ 2025-05-19 19:34:49.713475 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:49.714368 | orchestrator | 2025-05-19 19:34:49.715546 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 19:34:49.716453 | orchestrator | Monday 19 May 2025 19:34:49 +0000 (0:00:00.112) 0:01:08.187 ************ 2025-05-19 19:34:50.046194 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:34:50.046438 | orchestrator |  "vgs_report": { 2025-05-19 19:34:50.046852 | orchestrator |  "vg": [] 2025-05-19 19:34:50.047713 | orchestrator |  } 2025-05-19 19:34:50.048285 | orchestrator | } 2025-05-19 19:34:50.048661 | orchestrator | 2025-05-19 19:34:50.049183 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 19:34:50.049812 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.332) 0:01:08.520 ************ 2025-05-19 19:34:50.185106 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.186489 | orchestrator | 2025-05-19 19:34:50.187095 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-19 19:34:50.187738 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.138) 0:01:08.658 ************ 2025-05-19 19:34:50.320652 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.321197 | orchestrator | 2025-05-19 19:34:50.321836 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 19:34:50.322846 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.136) 0:01:08.795 ************ 2025-05-19 19:34:50.455939 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.456667 | orchestrator | 2025-05-19 19:34:50.457828 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 19:34:50.458575 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.135) 0:01:08.930 ************ 2025-05-19 19:34:50.597512 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.599507 | orchestrator | 2025-05-19 19:34:50.600797 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 19:34:50.601356 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.140) 0:01:09.070 ************ 2025-05-19 19:34:50.737837 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.738014 | orchestrator | 2025-05-19 19:34:50.739216 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 19:34:50.740353 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.140) 0:01:09.211 ************ 2025-05-19 19:34:50.878871 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:50.879324 | orchestrator | 2025-05-19 19:34:50.880190 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 19:34:50.882581 | orchestrator | Monday 19 May 2025 19:34:50 +0000 (0:00:00.141) 
0:01:09.352 ************ 2025-05-19 19:34:51.029260 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.029497 | orchestrator | 2025-05-19 19:34:51.029598 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 19:34:51.030351 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.148) 0:01:09.501 ************ 2025-05-19 19:34:51.159609 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.160257 | orchestrator | 2025-05-19 19:34:51.160609 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 19:34:51.160930 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.132) 0:01:09.634 ************ 2025-05-19 19:34:51.310624 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.311015 | orchestrator | 2025-05-19 19:34:51.312071 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 19:34:51.312717 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.150) 0:01:09.785 ************ 2025-05-19 19:34:51.449332 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.449524 | orchestrator | 2025-05-19 19:34:51.449740 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 19:34:51.450605 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.138) 0:01:09.923 ************ 2025-05-19 19:34:51.596013 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.596114 | orchestrator | 2025-05-19 19:34:51.596608 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 19:34:51.597766 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.145) 0:01:10.069 ************ 2025-05-19 19:34:51.931913 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:51.932026 | orchestrator | 2025-05-19 19:34:51.932661 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 19:34:51.933172 | orchestrator | Monday 19 May 2025 19:34:51 +0000 (0:00:00.335) 0:01:10.405 ************ 2025-05-19 19:34:52.087175 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.087265 | orchestrator | 2025-05-19 19:34:52.087807 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 19:34:52.088311 | orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.156) 0:01:10.562 ************ 2025-05-19 19:34:52.237082 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.237463 | orchestrator | 2025-05-19 19:34:52.238156 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 19:34:52.239202 | orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.149) 0:01:10.711 ************ 2025-05-19 19:34:52.403654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:52.404213 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:52.404813 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.405328 | orchestrator | 2025-05-19 19:34:52.407832 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 19:34:52.407857 | 
orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.166) 0:01:10.878 ************ 2025-05-19 19:34:52.593069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:52.594433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:52.596764 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.596791 | orchestrator | 2025-05-19 19:34:52.596805 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 19:34:52.597744 | orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.188) 0:01:11.066 ************ 2025-05-19 19:34:52.754793 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:52.755359 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:52.757569 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.757594 | orchestrator | 2025-05-19 19:34:52.758421 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 19:34:52.758948 | orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.161) 0:01:11.228 ************ 2025-05-19 19:34:52.912621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:52.913766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:52.914538 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:52.915689 | orchestrator | 2025-05-19 19:34:52.916498 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 19:34:52.917103 | orchestrator | Monday 19 May 2025 19:34:52 +0000 (0:00:00.158) 0:01:11.386 ************ 2025-05-19 19:34:53.089370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:53.090231 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:53.090939 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:53.092224 | orchestrator | 2025-05-19 19:34:53.093327 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 19:34:53.093786 | orchestrator | Monday 19 May 2025 19:34:53 +0000 (0:00:00.175) 0:01:11.562 ************ 2025-05-19 19:34:53.266349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:53.268374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:53.270886 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:34:53.272859 | orchestrator | 2025-05-19 19:34:53.273609 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 19:34:53.274382 | orchestrator | Monday 19 May 2025 19:34:53 +0000 (0:00:00.178) 0:01:11.741 ************ 2025-05-19 19:34:53.447574 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:53.448482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:53.448792 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:53.449204 | orchestrator | 2025-05-19 19:34:53.449630 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 19:34:53.450182 | orchestrator | Monday 19 May 2025 19:34:53 +0000 (0:00:00.180) 0:01:11.921 ************ 2025-05-19 19:34:53.622399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:53.622891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:53.623684 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:53.624446 | orchestrator | 2025-05-19 19:34:53.625223 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 19:34:53.625647 | orchestrator | Monday 19 May 2025 19:34:53 +0000 (0:00:00.173) 0:01:12.095 ************ 2025-05-19 19:34:54.317575 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:54.317793 | orchestrator | 2025-05-19 19:34:54.318721 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 19:34:54.319579 | orchestrator | Monday 19 May 2025 19:34:54 +0000 (0:00:00.696) 0:01:12.791 ************ 2025-05-19 19:34:54.850320 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:54.850454 | orchestrator | 2025-05-19 19:34:54.851076 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 19:34:54.851631 | orchestrator | Monday 19 May 2025 19:34:54 +0000 (0:00:00.531) 0:01:13.323 ************ 2025-05-19 19:34:54.999023 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:34:54.999466 | orchestrator | 2025-05-19 19:34:54.999505 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 19:34:54.999816 | orchestrator | Monday 19 May 2025 19:34:54 +0000 (0:00:00.149) 0:01:13.473 ************ 2025-05-19 19:34:55.166504 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'vg_name': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'}) 2025-05-19 19:34:55.167160 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'vg_name': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}) 2025-05-19 19:34:55.167473 | orchestrator | 2025-05-19 19:34:55.167826 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 19:34:55.168537 | orchestrator | Monday 19 May 2025 19:34:55 +0000 (0:00:00.168) 0:01:13.641 ************ 2025-05-19 
19:34:55.336199 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:55.337263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:55.337968 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:55.338955 | orchestrator | 2025-05-19 19:34:55.340439 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 19:34:55.341463 | orchestrator | Monday 19 May 2025 19:34:55 +0000 (0:00:00.169) 0:01:13.810 ************ 2025-05-19 19:34:55.495166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:55.495268 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:55.496008 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:55.497723 | orchestrator | 2025-05-19 19:34:55.498427 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 19:34:55.499304 | orchestrator | Monday 19 May 2025 19:34:55 +0000 (0:00:00.158) 0:01:13.969 ************ 2025-05-19 19:34:55.665220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'})  2025-05-19 19:34:55.665714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'})  2025-05-19 19:34:55.666666 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:34:55.667536 | orchestrator | 2025-05-19 19:34:55.668283 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 19:34:55.669070 | orchestrator | Monday 19 May 2025 19:34:55 +0000 (0:00:00.168) 0:01:14.137 ************ 2025-05-19 19:34:56.094619 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:34:56.097277 | orchestrator |  "lvm_report": { 2025-05-19 19:34:56.098220 | orchestrator |  "lv": [ 2025-05-19 19:34:56.099460 | orchestrator |  { 2025-05-19 19:34:56.100238 | orchestrator |  "lv_name": "osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623", 2025-05-19 19:34:56.101191 | orchestrator |  "vg_name": "ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623" 2025-05-19 19:34:56.101889 | orchestrator |  }, 2025-05-19 19:34:56.102539 | orchestrator |  { 2025-05-19 19:34:56.103824 | orchestrator |  "lv_name": "osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d", 2025-05-19 19:34:56.104823 | orchestrator |  "vg_name": "ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d" 2025-05-19 19:34:56.105631 | orchestrator |  } 2025-05-19 19:34:56.106372 | orchestrator |  ], 2025-05-19 19:34:56.107048 | orchestrator |  "pv": [ 2025-05-19 19:34:56.107636 | orchestrator |  { 2025-05-19 19:34:56.108305 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 19:34:56.108727 | orchestrator |  "vg_name": "ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d" 2025-05-19 19:34:56.109464 | orchestrator |  }, 2025-05-19 19:34:56.110006 | orchestrator |  { 2025-05-19 19:34:56.110814 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 19:34:56.111282 | orchestrator |  
"vg_name": "ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623" 2025-05-19 19:34:56.111929 | orchestrator |  } 2025-05-19 19:34:56.112588 | orchestrator |  ] 2025-05-19 19:34:56.113226 | orchestrator |  } 2025-05-19 19:34:56.113990 | orchestrator | } 2025-05-19 19:34:56.114574 | orchestrator | 2025-05-19 19:34:56.115298 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:34:56.115426 | orchestrator | 2025-05-19 19:34:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:34:56.115447 | orchestrator | 2025-05-19 19:34:56 | INFO  | Please wait and do not abort execution. 2025-05-19 19:34:56.115829 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 19:34:56.117645 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 19:34:56.118839 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 19:34:56.119480 | orchestrator | 2025-05-19 19:34:56.120644 | orchestrator | 2025-05-19 19:34:56.121565 | orchestrator | 2025-05-19 19:34:56.122525 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:34:56.123391 | orchestrator | Monday 19 May 2025 19:34:56 +0000 (0:00:00.430) 0:01:14.568 ************ 2025-05-19 19:34:56.124486 | orchestrator | =============================================================================== 2025-05-19 19:34:56.126482 | orchestrator | Create block VGs -------------------------------------------------------- 5.78s 2025-05-19 19:34:56.127116 | orchestrator | Create block LVs -------------------------------------------------------- 4.07s 2025-05-19 19:34:56.127699 | orchestrator | Print LVM report data --------------------------------------------------- 1.94s 2025-05-19 19:34:56.128409 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.91s 2025-05-19 19:34:56.128891 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.73s 2025-05-19 19:34:56.129697 | orchestrator | Add known links to the list of available block devices ------------------ 1.59s 2025-05-19 19:34:56.130427 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s 2025-05-19 19:34:56.130749 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s 2025-05-19 19:34:56.131554 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.52s 2025-05-19 19:34:56.131788 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2025-05-19 19:34:56.132654 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s 2025-05-19 19:34:56.133197 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-05-19 19:34:56.133853 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-05-19 19:34:56.134232 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.77s 2025-05-19 19:34:56.135030 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.71s 2025-05-19 19:34:56.135288 | orchestrator | Get initial list of available block devices 
----------------------------- 0.69s 2025-05-19 19:34:56.136956 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-05-19 19:34:56.138181 | orchestrator | Fail if number of OSDs exceeds num_osds for a WAL VG -------------------- 0.66s 2025-05-19 19:34:56.138982 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-05-19 19:34:56.139655 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.65s 2025-05-19 19:34:58.102964 | orchestrator | 2025-05-19 19:34:58 | INFO  | Task 9591fcc2-e834-4412-a201-7a9e2b42f982 (facts) was prepared for execution. 2025-05-19 19:34:58.103048 | orchestrator | 2025-05-19 19:34:58 | INFO  | It takes a moment until task 9591fcc2-e834-4412-a201-7a9e2b42f982 (facts) has been started and output is visible here. 2025-05-19 19:35:01.288661 | orchestrator | 2025-05-19 19:35:01.288755 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 19:35:01.289756 | orchestrator | 2025-05-19 19:35:01.289850 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 19:35:01.289928 | orchestrator | Monday 19 May 2025 19:35:01 +0000 (0:00:00.217) 0:00:00.217 ************ 2025-05-19 19:35:02.295962 | orchestrator | ok: [testbed-manager] 2025-05-19 19:35:02.297187 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:35:02.297928 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:35:02.300608 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:35:02.301309 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:35:02.302243 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:35:02.303195 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:35:02.303390 | orchestrator | 2025-05-19 19:35:02.304251 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 19:35:02.305541 | orchestrator | Monday 19 May 2025 19:35:02 +0000 (0:00:01.009) 0:00:01.227 ************ 2025-05-19 19:35:02.451918 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:35:02.528902 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:35:02.606402 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:35:02.684051 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:35:02.761079 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:35:03.477233 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:35:03.477859 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:35:03.478682 | orchestrator | 2025-05-19 19:35:03.479441 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 19:35:03.480146 | orchestrator | 2025-05-19 19:35:03.480758 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 19:35:03.481258 | orchestrator | Monday 19 May 2025 19:35:03 +0000 (0:00:01.181) 0:00:02.409 ************ 2025-05-19 19:35:08.121077 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:35:08.121252 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:35:08.122123 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:35:08.122406 | orchestrator | ok: [testbed-manager] 2025-05-19 19:35:08.123412 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:35:08.124010 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:35:08.124368 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:35:08.125436 | orchestrator | 2025-05-19 19:35:08.125866 
| orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 19:35:08.126189 | orchestrator | 2025-05-19 19:35:08.126666 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 19:35:08.127073 | orchestrator | Monday 19 May 2025 19:35:08 +0000 (0:00:04.645) 0:00:07.054 ************ 2025-05-19 19:35:08.454539 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:35:08.533517 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:35:08.610201 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:35:08.697401 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:35:08.775628 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:35:08.818436 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:35:08.818537 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:35:08.819577 | orchestrator | 2025-05-19 19:35:08.820218 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:35:08.821867 | orchestrator | 2025-05-19 19:35:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 19:35:08.821970 | orchestrator | 2025-05-19 19:35:08 | INFO  | Please wait and do not abort execution. 2025-05-19 19:35:08.823365 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.823876 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.825077 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.825609 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.827253 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.827700 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.828188 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:35:08.828821 | orchestrator | 2025-05-19 19:35:08.829324 | orchestrator | Monday 19 May 2025 19:35:08 +0000 (0:00:00.698) 0:00:07.753 ************ 2025-05-19 19:35:08.829747 | orchestrator | =============================================================================== 2025-05-19 19:35:08.830240 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.65s 2025-05-19 19:35:08.830711 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2025-05-19 19:35:08.831198 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2025-05-19 19:35:08.831758 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-05-19 19:35:09.445782 | orchestrator | 2025-05-19 19:35:09.447448 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon May 19 19:35:09 UTC 2025 2025-05-19 19:35:09.447483 | orchestrator | 2025-05-19 19:35:10.856397 | orchestrator | 2025-05-19 19:35:10 | INFO  | Collection nutshell is prepared for execution 2025-05-19 19:35:10.856503 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [0] - dotfiles 2025-05-19 19:35:10.861011 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [0] - homer 2025-05-19 19:35:10.861054 | 
orchestrator | 2025-05-19 19:35:10 | INFO  | D [0] - netdata 2025-05-19 19:35:10.861067 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [0] - openstackclient 2025-05-19 19:35:10.861080 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [0] - phpmyadmin 2025-05-19 19:35:10.861121 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [0] - common 2025-05-19 19:35:10.862582 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [1] -- loadbalancer 2025-05-19 19:35:10.862604 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [2] --- opensearch 2025-05-19 19:35:10.863587 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [2] --- mariadb-ng 2025-05-19 19:35:10.863608 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [3] ---- horizon 2025-05-19 19:35:10.863619 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [3] ---- keystone 2025-05-19 19:35:10.863659 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [4] ----- neutron 2025-05-19 19:35:10.863671 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ wait-for-nova 2025-05-19 19:35:10.863683 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [5] ------ octavia 2025-05-19 19:35:10.863749 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- barbican 2025-05-19 19:35:10.863764 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- designate 2025-05-19 19:35:10.863774 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- ironic 2025-05-19 19:35:10.863785 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- placement 2025-05-19 19:35:10.863796 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- magnum 2025-05-19 19:35:10.864149 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [1] -- openvswitch 2025-05-19 19:35:10.864169 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [2] --- ovn 2025-05-19 19:35:10.864368 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [1] -- memcached 2025-05-19 19:35:10.864388 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [1] -- redis 2025-05-19 19:35:10.864399 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [1] -- rabbitmq-ng 2025-05-19 19:35:10.864452 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [0] - kubernetes 2025-05-19 19:35:10.864467 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [1] -- kubeconfig 2025-05-19 19:35:10.864478 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [1] -- copy-kubeconfig 2025-05-19 19:35:10.864769 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [0] - ceph 2025-05-19 19:35:10.865911 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [1] -- ceph-pools 2025-05-19 19:35:10.865958 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [2] --- copy-ceph-keys 2025-05-19 19:35:10.866086 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [3] ---- cephclient 2025-05-19 19:35:10.866105 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-19 19:35:10.866190 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [4] ----- wait-for-keystone 2025-05-19 19:35:10.866207 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-19 19:35:10.866217 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ glance 2025-05-19 19:35:10.866228 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ cinder 2025-05-19 19:35:10.866239 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ nova 2025-05-19 19:35:10.866555 | orchestrator | 2025-05-19 19:35:10 | INFO  | A [4] ----- prometheus 2025-05-19 19:35:10.866575 | orchestrator | 2025-05-19 19:35:10 | INFO  | D [5] ------ grafana 2025-05-19 19:35:10.992505 | orchestrator | 2025-05-19 19:35:10 | 
INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-19 19:35:10.994248 | orchestrator | 2025-05-19 19:35:10 | INFO  | Tasks are running in the background 2025-05-19 19:35:12.790539 | orchestrator | 2025-05-19 19:35:12 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-19 19:35:14.962718 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:14.962832 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state STARTED 2025-05-19 19:35:14.964290 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:14.964318 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:14.964729 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:14.966455 | orchestrator | 2025-05-19 19:35:14 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:14.966472 | orchestrator | 2025-05-19 19:35:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:18.022653 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:18.022761 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state STARTED 2025-05-19 19:35:18.022889 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:18.022947 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:18.023844 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:18.024359 | orchestrator | 2025-05-19 19:35:18 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:18.024914 | orchestrator | 2025-05-19 19:35:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:21.067452 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:21.067571 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state STARTED 2025-05-19 19:35:21.067588 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:21.067601 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:21.067634 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:21.067646 | orchestrator | 2025-05-19 19:35:21 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:21.067659 | orchestrator | 2025-05-19 19:35:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:24.119595 | orchestrator | 2025-05-19 19:35:24 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:24.119705 | orchestrator | 2025-05-19 19:35:24 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state STARTED 2025-05-19 19:35:24.119717 | orchestrator | 2025-05-19 19:35:24 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:24.119727 | orchestrator | 
2025-05-19 19:35:24 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:24.123456 | orchestrator | 2025-05-19 19:35:24 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:24.123493 | orchestrator | 2025-05-19 19:35:24 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:24.123506 | orchestrator | 2025-05-19 19:35:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:27.184001 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:27.185210 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state STARTED 2025-05-19 19:35:27.190368 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:27.190406 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:27.193182 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:27.199464 | orchestrator | 2025-05-19 19:35:27 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:27.199530 | orchestrator | 2025-05-19 19:35:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:30.260112 | orchestrator | 2025-05-19 19:35:30.260263 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-19 19:35:30.260279 | orchestrator | 2025-05-19 19:35:30.260291 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-05-19 19:35:30.260303 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:00.194) 0:00:00.194 ************ 2025-05-19 19:35:30.260314 | orchestrator | changed: [testbed-manager] 2025-05-19 19:35:30.260340 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:35:30.260362 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:35:30.260374 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:35:30.260384 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:35:30.260395 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:35:30.260406 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:35:30.260417 | orchestrator | 2025-05-19 19:35:30.260429 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-19 19:35:30.260440 | orchestrator | Monday 19 May 2025 19:35:22 +0000 (0:00:03.229) 0:00:03.424 ************ 2025-05-19 19:35:30.260453 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-19 19:35:30.260503 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-19 19:35:30.260516 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-19 19:35:30.260527 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-19 19:35:30.260538 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-19 19:35:30.260549 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-19 19:35:30.260560 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-19 19:35:30.260571 | orchestrator | 2025-05-19 19:35:30.260582 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-19 19:35:30.260594 | orchestrator | Monday 19 May 2025 19:35:24 +0000 (0:00:02.434) 0:00:05.859 ************ 2025-05-19 19:35:30.260609 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:23.080544', 'end': '2025-05-19 19:35:23.087953', 'delta': '0:00:00.007409', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260633 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:22.999807', 'end': '2025-05-19 19:35:23.003930', 'delta': '0:00:00.004123', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260669 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:23.266456', 'end': '2025-05-19 19:35:23.274179', 'delta': '0:00:00.007723', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260743 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:23.508501', 'end': '2025-05-19 19:35:23.517080', 'delta': '0:00:00.008579', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
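[Editor's note] The geerlingguy.dotfiles play running here clones a dotfiles repository and links .tmux.conf into each user's home directory. A hypothetical invocation using the role's documented variables is sketched below; the repository URL and destination path are placeholders, and only .tmux.conf is confirmed by the log output.

# Hypothetical playbook producing the dotfiles output above; URL/paths are placeholders.
- hosts: all
  roles:
    - role: geerlingguy.dotfiles
      vars:
        dotfiles_repo: https://github.com/example/dotfiles.git   # placeholder
        dotfiles_repo_local_destination: ~/dotfiles               # placeholder
        dotfiles_files:
          - .tmux.conf
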
2025-05-19 19:35:30.260759 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:23.823224', 'end': '2025-05-19 19:35:23.832007', 'delta': '0:00:00.008783', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260771 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:24.188699', 'end': '2025-05-19 19:35:24.195624', 'delta': '0:00:00.006925', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260782 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 19:35:24.223295', 'end': '2025-05-19 19:35:24.230341', 'delta': '0:00:00.007046', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 19:35:30.260803 | orchestrator | 2025-05-19 19:35:30.260814 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-05-19 19:35:30.260825 | orchestrator | Monday 19 May 2025 19:35:27 +0000 (0:00:02.531) 0:00:08.390 ************ 2025-05-19 19:35:30.260836 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-19 19:35:30.260847 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-19 19:35:30.260858 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-19 19:35:30.260869 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-19 19:35:30.260879 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-19 19:35:30.260890 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-19 19:35:30.260901 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-19 19:35:30.260911 | orchestrator | 2025-05-19 19:35:30.260922 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:35:30.260934 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.260947 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.260958 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.260981 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.260993 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.261003 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.261014 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:35:30.261025 | orchestrator | 2025-05-19 19:35:30.261036 | orchestrator | Monday 19 May 2025 19:35:29 +0000 (0:00:02.456) 0:00:10.846 ************ 2025-05-19 19:35:30.261047 | orchestrator | =============================================================================== 2025-05-19 19:35:30.261057 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.23s 2025-05-19 19:35:30.261069 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.53s 2025-05-19 19:35:30.261079 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.46s 2025-05-19 19:35:30.261090 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. 
-------- 2.43s 2025-05-19 19:35:30.261173 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:30.261189 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task d7c0ede2-98d9-4007-805c-a108bb3000b2 is in state SUCCESS 2025-05-19 19:35:30.261200 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:30.261307 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:30.261322 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:30.264595 | orchestrator | 2025-05-19 19:35:30 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:30.264632 | orchestrator | 2025-05-19 19:35:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:33.316549 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:33.316671 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:33.319214 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:33.319261 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:33.319855 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:33.320162 | orchestrator | 2025-05-19 19:35:33 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:33.320194 | orchestrator | 2025-05-19 19:35:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:36.387212 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:36.389974 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:36.395058 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:36.402336 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:36.410972 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:36.418214 | orchestrator | 2025-05-19 19:35:36 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:36.418278 | orchestrator | 2025-05-19 19:35:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:39.528547 | orchestrator | 2025-05-19 19:35:39 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:39.528647 | orchestrator | 2025-05-19 19:35:39 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:39.528660 | orchestrator | 2025-05-19 19:35:39 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:39.537178 | orchestrator | 2025-05-19 19:35:39 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:39.537716 | orchestrator | 2025-05-19 19:35:39 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:39.546388 | orchestrator | 2025-05-19 19:35:39 | INFO  | 
Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:39.546428 | orchestrator | 2025-05-19 19:35:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:42.636878 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:42.637880 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:42.641002 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:42.642959 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:42.645539 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:42.646801 | orchestrator | 2025-05-19 19:35:42 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:42.647249 | orchestrator | 2025-05-19 19:35:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:45.746893 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:45.753552 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:45.753616 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:45.753622 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:45.755037 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:45.756700 | orchestrator | 2025-05-19 19:35:45 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:45.757468 | orchestrator | 2025-05-19 19:35:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:48.808965 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:48.809093 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:48.809117 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:48.811372 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:48.815779 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:48.816324 | orchestrator | 2025-05-19 19:35:48 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:48.816363 | orchestrator | 2025-05-19 19:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:51.863530 | orchestrator | 2025-05-19 19:35:51 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:51.868492 | orchestrator | 2025-05-19 19:35:51 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:51.870468 | orchestrator | 2025-05-19 19:35:51 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:51.873347 | orchestrator | 2025-05-19 19:35:51 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:51.878878 | orchestrator | 
2025-05-19 19:35:51 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state STARTED 2025-05-19 19:35:51.882386 | orchestrator | 2025-05-19 19:35:51 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:51.882423 | orchestrator | 2025-05-19 19:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:54.932904 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:54.934809 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:54.936607 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:54.939183 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:35:54.940829 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:54.941294 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task 1ce41c7b-07f9-4bc3-a108-0d006869ccf8 is in state SUCCESS 2025-05-19 19:35:54.944082 | orchestrator | 2025-05-19 19:35:54 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:54.944193 | orchestrator | 2025-05-19 19:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:35:58.013255 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:35:58.014249 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:35:58.019341 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:35:58.019661 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:35:58.020483 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:35:58.021006 | orchestrator | 2025-05-19 19:35:58 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:35:58.021076 | orchestrator | 2025-05-19 19:35:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:01.059665 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:01.059777 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:36:01.062767 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:01.062798 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:01.062810 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:01.062822 | orchestrator | 2025-05-19 19:36:01 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:01.062834 | orchestrator | 2025-05-19 19:36:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:04.116510 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:04.119148 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 
19:36:04.124055 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:04.124093 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:04.124100 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:04.124107 | orchestrator | 2025-05-19 19:36:04 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:04.124393 | orchestrator | 2025-05-19 19:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:07.171589 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:07.174432 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:36:07.174501 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:07.174515 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:07.175457 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:07.175911 | orchestrator | 2025-05-19 19:36:07 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:07.175980 | orchestrator | 2025-05-19 19:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:10.241630 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:10.248738 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state STARTED 2025-05-19 19:36:10.248795 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:10.252917 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:10.257620 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:10.257707 | orchestrator | 2025-05-19 19:36:10 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:10.257724 | orchestrator | 2025-05-19 19:36:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:13.319783 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:13.319890 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task b0968893-e7f4-4c05-8c17-4166ab42cdea is in state SUCCESS 2025-05-19 19:36:13.320243 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:13.320553 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:13.320980 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:13.321509 | orchestrator | 2025-05-19 19:36:13 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:13.321533 | orchestrator | 2025-05-19 19:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:16.353254 | orchestrator | 2025-05-19 19:36:16 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in 
state STARTED 2025-05-19 19:36:16.353430 | orchestrator | 2025-05-19 19:36:16 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:16.353800 | orchestrator | 2025-05-19 19:36:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:16.354200 | orchestrator | 2025-05-19 19:36:16 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:16.354975 | orchestrator | 2025-05-19 19:36:16 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:16.354985 | orchestrator | 2025-05-19 19:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:19.408838 | orchestrator | 2025-05-19 19:36:19 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:19.408974 | orchestrator | 2025-05-19 19:36:19 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:19.411109 | orchestrator | 2025-05-19 19:36:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:19.411252 | orchestrator | 2025-05-19 19:36:19 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:19.411266 | orchestrator | 2025-05-19 19:36:19 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:19.411277 | orchestrator | 2025-05-19 19:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:22.463215 | orchestrator | 2025-05-19 19:36:22 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state STARTED 2025-05-19 19:36:22.463453 | orchestrator | 2025-05-19 19:36:22 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:22.467421 | orchestrator | 2025-05-19 19:36:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:22.467492 | orchestrator | 2025-05-19 19:36:22 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:22.467505 | orchestrator | 2025-05-19 19:36:22 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:22.467517 | orchestrator | 2025-05-19 19:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:25.517819 | orchestrator | 2025-05-19 19:36:25.517903 | orchestrator | 2025-05-19 19:36:25.517912 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-19 19:36:25.517920 | orchestrator | 2025-05-19 19:36:25.517927 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-19 19:36:25.517935 | orchestrator | Monday 19 May 2025 19:35:19 +0000 (0:00:00.376) 0:00:00.376 ************ 2025-05-19 19:36:25.517941 | orchestrator | ok: [testbed-manager] => { 2025-05-19 19:36:25.517950 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-19 19:36:25.517958 | orchestrator | } 2025-05-19 19:36:25.517965 | orchestrator | 2025-05-19 19:36:25.517972 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-19 19:36:25.517978 | orchestrator | Monday 19 May 2025 19:35:19 +0000 (0:00:00.422) 0:00:00.798 ************ 2025-05-19 19:36:25.517984 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.517992 | orchestrator | 2025-05-19 19:36:25.517998 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-19 19:36:25.518004 | orchestrator | Monday 19 May 2025 19:35:20 +0000 (0:00:00.913) 0:00:01.711 ************ 2025-05-19 19:36:25.518011 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-19 19:36:25.518075 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-19 19:36:25.518082 | orchestrator | 2025-05-19 19:36:25.518089 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-19 19:36:25.518095 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:00.921) 0:00:02.633 ************ 2025-05-19 19:36:25.518101 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518108 | orchestrator | 2025-05-19 19:36:25.518114 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-19 19:36:25.518139 | orchestrator | Monday 19 May 2025 19:35:24 +0000 (0:00:02.762) 0:00:05.396 ************ 2025-05-19 19:36:25.518146 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518152 | orchestrator | 2025-05-19 19:36:25.518159 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-19 19:36:25.518165 | orchestrator | Monday 19 May 2025 19:35:25 +0000 (0:00:01.751) 0:00:07.148 ************ 2025-05-19 19:36:25.518172 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
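The FAILED - RETRYING line above is the expected shape of this step rather than an error: bringing the homer compose service up is simply attempted again (ten retries here) while the image is still being pulled, and the play only fails once the retry budget is exhausted. A small Python sketch of that retry-until-running pattern, under the assumption that a container reported as running by docker compose counts as success (not the role's actual implementation):

    import subprocess
    import time

    def service_is_running(project_dir: str) -> bool:
        # Assumed probe: any container listed as running for this compose project counts as up.
        result = subprocess.run(
            ["docker", "compose", "ps", "--status", "running", "--quiet"],
            cwd=project_dir, capture_output=True, text=True,
        )
        return result.returncode == 0 and result.stdout.strip() != ""

    def manage_service(project_dir: str, retries: int = 10, delay: float = 10.0) -> None:
        subprocess.run(["docker", "compose", "up", "-d"], cwd=project_dir, check=True)
        for attempts_left in range(retries, 0, -1):
            if service_is_running(project_dir):
                return
            print(f"FAILED - RETRYING: Manage service ({attempts_left} retries left).")
            time.sleep(delay)
        raise RuntimeError("service did not reach the running state")

    if __name__ == "__main__":
        manage_service("/opt/homer")  # directory created by the role earlier in this play

The long runtime reported for this task below (roughly 24 seconds) is dominated by exactly this wait, not by the compose invocation itself.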
2025-05-19 19:36:25.518178 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518184 | orchestrator | 2025-05-19 19:36:25.518191 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-19 19:36:25.518197 | orchestrator | Monday 19 May 2025 19:35:50 +0000 (0:00:24.415) 0:00:31.564 ************ 2025-05-19 19:36:25.518219 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518226 | orchestrator | 2025-05-19 19:36:25.518232 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:36:25.518239 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.518247 | orchestrator | 2025-05-19 19:36:25.518254 | orchestrator | Monday 19 May 2025 19:35:52 +0000 (0:00:01.919) 0:00:33.483 ************ 2025-05-19 19:36:25.518260 | orchestrator | =============================================================================== 2025-05-19 19:36:25.518266 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.41s 2025-05-19 19:36:25.518272 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.76s 2025-05-19 19:36:25.518278 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.92s 2025-05-19 19:36:25.518288 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.75s 2025-05-19 19:36:25.518295 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.92s 2025-05-19 19:36:25.518301 | orchestrator | osism.services.homer : Create traefik external network ------------------ 0.91s 2025-05-19 19:36:25.518307 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.42s 2025-05-19 19:36:25.518313 | orchestrator | 2025-05-19 19:36:25.518319 | orchestrator | 2025-05-19 19:36:25.518326 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-19 19:36:25.518332 | orchestrator | 2025-05-19 19:36:25.518339 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-19 19:36:25.518345 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:00.322) 0:00:00.322 ************ 2025-05-19 19:36:25.518352 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-19 19:36:25.518359 | orchestrator | 2025-05-19 19:36:25.518366 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-19 19:36:25.518372 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:00.251) 0:00:00.573 ************ 2025-05-19 19:36:25.518378 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-19 19:36:25.518385 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-19 19:36:25.518392 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-19 19:36:25.518399 | orchestrator | 2025-05-19 19:36:25.518406 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-19 19:36:25.518413 | orchestrator | Monday 19 May 2025 19:35:19 +0000 (0:00:01.157) 0:00:01.730 ************ 2025-05-19 19:36:25.518420 | orchestrator | changed: [testbed-manager] 
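The openstackclient deployed here is a containerized client, and later tasks in this play copy an openstack wrapper script onto the manager so the CLI can be used as if it were installed natively. An illustrative Python equivalent of such a wrapper; the wrapper actually shipped by the role is not shown in this log, and the container name below is an assumption:

    import os
    import subprocess
    import sys

    # Assumed container name; the real wrapper targets the container started from the compose file above.
    CONTAINER = os.environ.get("OPENSTACKCLIENT_CONTAINER", "openstackclient")

    def main() -> int:
        # Pass all CLI arguments straight through to the openstack client inside the container.
        return subprocess.call(["docker", "exec", "-i", CONTAINER, "openstack", *sys.argv[1:]])

    if __name__ == "__main__":
        sys.exit(main())

With such a wrapper on PATH, "openstack server list" on the manager runs inside the container while behaving like a local command.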
2025-05-19 19:36:25.518428 | orchestrator | 2025-05-19 19:36:25.518434 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-19 19:36:25.518442 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:01.119) 0:00:02.850 ************ 2025-05-19 19:36:25.518449 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-19 19:36:25.518456 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518464 | orchestrator | 2025-05-19 19:36:25.518483 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-19 19:36:25.518490 | orchestrator | Monday 19 May 2025 19:36:03 +0000 (0:00:42.738) 0:00:45.588 ************ 2025-05-19 19:36:25.518497 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518504 | orchestrator | 2025-05-19 19:36:25.518512 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-19 19:36:25.518518 | orchestrator | Monday 19 May 2025 19:36:05 +0000 (0:00:01.819) 0:00:47.408 ************ 2025-05-19 19:36:25.518525 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518532 | orchestrator | 2025-05-19 19:36:25.518539 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-19 19:36:25.518551 | orchestrator | Monday 19 May 2025 19:36:06 +0000 (0:00:01.242) 0:00:48.651 ************ 2025-05-19 19:36:25.518557 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518564 | orchestrator | 2025-05-19 19:36:25.518572 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-19 19:36:25.518579 | orchestrator | Monday 19 May 2025 19:36:08 +0000 (0:00:01.993) 0:00:50.644 ************ 2025-05-19 19:36:25.518586 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518593 | orchestrator | 2025-05-19 19:36:25.518599 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-19 19:36:25.518607 | orchestrator | Monday 19 May 2025 19:36:09 +0000 (0:00:01.081) 0:00:51.726 ************ 2025-05-19 19:36:25.518613 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.518620 | orchestrator | 2025-05-19 19:36:25.518627 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-19 19:36:25.518634 | orchestrator | Monday 19 May 2025 19:36:10 +0000 (0:00:00.892) 0:00:52.618 ************ 2025-05-19 19:36:25.518641 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518648 | orchestrator | 2025-05-19 19:36:25.518655 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:36:25.518662 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.518669 | orchestrator | 2025-05-19 19:36:25.518676 | orchestrator | Monday 19 May 2025 19:36:11 +0000 (0:00:00.429) 0:00:53.047 ************ 2025-05-19 19:36:25.518683 | orchestrator | =============================================================================== 2025-05-19 19:36:25.518690 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 42.74s 2025-05-19 19:36:25.518697 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.99s 2025-05-19 19:36:25.518704 | orchestrator | osism.services.openstackclient : Copy 
openstack wrapper script ---------- 1.82s 2025-05-19 19:36:25.518711 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.24s 2025-05-19 19:36:25.518719 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.16s 2025-05-19 19:36:25.518726 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.12s 2025-05-19 19:36:25.518732 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.08s 2025-05-19 19:36:25.518739 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.89s 2025-05-19 19:36:25.518746 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s 2025-05-19 19:36:25.518752 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.25s 2025-05-19 19:36:25.518759 | orchestrator | 2025-05-19 19:36:25.518765 | orchestrator | 2025-05-19 19:36:25.518771 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:36:25.518777 | orchestrator | 2025-05-19 19:36:25.518786 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:36:25.518792 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:00.204) 0:00:00.204 ************ 2025-05-19 19:36:25.518798 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-19 19:36:25.518804 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-19 19:36:25.518810 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-19 19:36:25.518816 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-19 19:36:25.518823 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-19 19:36:25.518828 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-19 19:36:25.518835 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-19 19:36:25.518841 | orchestrator | 2025-05-19 19:36:25.518847 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-19 19:36:25.518853 | orchestrator | 2025-05-19 19:36:25.518862 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-19 19:36:25.518869 | orchestrator | Monday 19 May 2025 19:35:20 +0000 (0:00:01.347) 0:00:01.552 ************ 2025-05-19 19:36:25.518885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:36:25.518893 | orchestrator | 2025-05-19 19:36:25.518899 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-19 19:36:25.518905 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:01.520) 0:00:03.073 ************ 2025-05-19 19:36:25.518912 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518918 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:36:25.518924 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:36:25.518930 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:36:25.518936 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:36:25.518942 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:36:25.518948 | 
orchestrator | ok: [testbed-node-5] 2025-05-19 19:36:25.518954 | orchestrator | 2025-05-19 19:36:25.518960 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-19 19:36:25.518970 | orchestrator | Monday 19 May 2025 19:35:24 +0000 (0:00:02.847) 0:00:05.921 ************ 2025-05-19 19:36:25.518977 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.518983 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:36:25.518989 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:36:25.518995 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:36:25.519001 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:36:25.519007 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:36:25.519013 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:36:25.519019 | orchestrator | 2025-05-19 19:36:25.519025 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-19 19:36:25.519031 | orchestrator | Monday 19 May 2025 19:35:27 +0000 (0:00:03.323) 0:00:09.245 ************ 2025-05-19 19:36:25.519038 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519044 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:36:25.519050 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:36:25.519056 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:36:25.519062 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:36:25.519068 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:36:25.519074 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:36:25.519080 | orchestrator | 2025-05-19 19:36:25.519086 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-19 19:36:25.519093 | orchestrator | Monday 19 May 2025 19:35:30 +0000 (0:00:02.538) 0:00:11.783 ************ 2025-05-19 19:36:25.519099 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:36:25.519105 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:36:25.519111 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:36:25.519117 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:36:25.519165 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:36:25.519172 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:36:25.519178 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519184 | orchestrator | 2025-05-19 19:36:25.519190 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-19 19:36:25.519196 | orchestrator | Monday 19 May 2025 19:35:40 +0000 (0:00:09.507) 0:00:21.290 ************ 2025-05-19 19:36:25.519202 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:36:25.519208 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:36:25.519214 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:36:25.519220 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:36:25.519226 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:36:25.519232 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:36:25.519238 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519244 | orchestrator | 2025-05-19 19:36:25.519251 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-19 19:36:25.519262 | orchestrator | Monday 19 May 2025 19:35:57 +0000 (0:00:17.630) 0:00:38.920 ************ 2025-05-19 19:36:25.519269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:36:25.519277 | orchestrator | 2025-05-19 19:36:25.519283 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-19 19:36:25.519289 | orchestrator | Monday 19 May 2025 19:35:59 +0000 (0:00:01.513) 0:00:40.434 ************ 2025-05-19 19:36:25.519295 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-19 19:36:25.519302 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-19 19:36:25.519308 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-19 19:36:25.519314 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-19 19:36:25.519320 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-19 19:36:25.519326 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-19 19:36:25.519339 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-19 19:36:25.519345 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-19 19:36:25.519351 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-19 19:36:25.519357 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-19 19:36:25.519366 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-19 19:36:25.519374 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-19 19:36:25.519384 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-19 19:36:25.519394 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-19 19:36:25.519403 | orchestrator | 2025-05-19 19:36:25.519409 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-19 19:36:25.519416 | orchestrator | Monday 19 May 2025 19:36:04 +0000 (0:00:05.660) 0:00:46.095 ************ 2025-05-19 19:36:25.519422 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.519428 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:36:25.519434 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:36:25.519440 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:36:25.519446 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:36:25.519452 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:36:25.519458 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:36:25.519464 | orchestrator | 2025-05-19 19:36:25.519470 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-19 19:36:25.519476 | orchestrator | Monday 19 May 2025 19:36:07 +0000 (0:00:02.328) 0:00:48.424 ************ 2025-05-19 19:36:25.519482 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519488 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:36:25.519494 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:36:25.519500 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:36:25.519506 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:36:25.519512 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:36:25.519518 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:36:25.519524 | orchestrator | 2025-05-19 19:36:25.519530 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-19 19:36:25.519537 | orchestrator | Monday 19 May 2025 19:36:09 +0000 (0:00:02.062) 0:00:50.486 ************ 2025-05-19 19:36:25.519543 | 
orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.519549 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:36:25.519555 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:36:25.519561 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:36:25.519571 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:36:25.519577 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:36:25.519583 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:36:25.519589 | orchestrator | 2025-05-19 19:36:25.519595 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-19 19:36:25.519605 | orchestrator | Monday 19 May 2025 19:36:11 +0000 (0:00:02.687) 0:00:53.173 ************ 2025-05-19 19:36:25.519612 | orchestrator | ok: [testbed-manager] 2025-05-19 19:36:25.519617 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:36:25.519623 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:36:25.519629 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:36:25.519635 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:36:25.519641 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:36:25.519647 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:36:25.519653 | orchestrator | 2025-05-19 19:36:25.519659 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-19 19:36:25.519665 | orchestrator | Monday 19 May 2025 19:36:14 +0000 (0:00:02.937) 0:00:56.111 ************ 2025-05-19 19:36:25.519671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-19 19:36:25.519679 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:36:25.519686 | orchestrator | 2025-05-19 19:36:25.519692 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-19 19:36:25.519698 | orchestrator | Monday 19 May 2025 19:36:16 +0000 (0:00:01.479) 0:00:57.590 ************ 2025-05-19 19:36:25.519704 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519710 | orchestrator | 2025-05-19 19:36:25.519716 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-19 19:36:25.519722 | orchestrator | Monday 19 May 2025 19:36:18 +0000 (0:00:01.922) 0:00:59.512 ************ 2025-05-19 19:36:25.519728 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:36:25.519734 | orchestrator | changed: [testbed-manager] 2025-05-19 19:36:25.519740 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:36:25.519746 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:36:25.519752 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:36:25.519758 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:36:25.519764 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:36:25.519770 | orchestrator | 2025-05-19 19:36:25.519776 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:36:25.519783 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519789 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519795 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519801 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519807 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519816 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519823 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:36:25.519829 | orchestrator | 2025-05-19 19:36:25.519835 | orchestrator | Monday 19 May 2025 19:36:21 +0000 (0:00:03.688) 0:01:03.201 ************ 2025-05-19 19:36:25.519841 | orchestrator | =============================================================================== 2025-05-19 19:36:25.519847 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.63s 2025-05-19 19:36:25.519853 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.51s 2025-05-19 19:36:25.519863 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.66s 2025-05-19 19:36:25.519869 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.69s 2025-05-19 19:36:25.519875 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.32s 2025-05-19 19:36:25.519882 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.94s 2025-05-19 19:36:25.519888 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.85s 2025-05-19 19:36:25.519894 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.68s 2025-05-19 19:36:25.519900 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.54s 2025-05-19 19:36:25.519906 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.33s 2025-05-19 19:36:25.519912 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.07s 2025-05-19 19:36:25.519918 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.92s 2025-05-19 19:36:25.519924 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.52s 2025-05-19 19:36:25.519930 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.51s 2025-05-19 19:36:25.519940 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.48s 2025-05-19 19:36:25.519947 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2025-05-19 19:36:25.519974 | orchestrator | 2025-05-19 19:36:25 | INFO  | Task f47fdd9d-5753-4064-a239-12a0b27acfb3 is in state SUCCESS 2025-05-19 19:36:25.519982 | orchestrator | 2025-05-19 19:36:25 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:25.519988 | orchestrator | 2025-05-19 19:36:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:25.519995 | orchestrator | 2025-05-19 19:36:25 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:25.520001 | orchestrator | 2025-05-19 19:36:25 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:25.520007 | 
orchestrator | 2025-05-19 19:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:28.551858 | orchestrator | 2025-05-19 19:36:28 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:28.552595 | orchestrator | 2025-05-19 19:36:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:28.552791 | orchestrator | 2025-05-19 19:36:28 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:28.553658 | orchestrator | 2025-05-19 19:36:28 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:28.553686 | orchestrator | 2025-05-19 19:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:31.600554 | orchestrator | 2025-05-19 19:36:31 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:31.600779 | orchestrator | 2025-05-19 19:36:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:31.604376 | orchestrator | 2025-05-19 19:36:31 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:31.604702 | orchestrator | 2025-05-19 19:36:31 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:31.604725 | orchestrator | 2025-05-19 19:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:34.645747 | orchestrator | 2025-05-19 19:36:34 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:34.646966 | orchestrator | 2025-05-19 19:36:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:34.647964 | orchestrator | 2025-05-19 19:36:34 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:34.648660 | orchestrator | 2025-05-19 19:36:34 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:34.648705 | orchestrator | 2025-05-19 19:36:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:37.683596 | orchestrator | 2025-05-19 19:36:37 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:37.683712 | orchestrator | 2025-05-19 19:36:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:37.683808 | orchestrator | 2025-05-19 19:36:37 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:37.684042 | orchestrator | 2025-05-19 19:36:37 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:37.684064 | orchestrator | 2025-05-19 19:36:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:40.716618 | orchestrator | 2025-05-19 19:36:40 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:40.717316 | orchestrator | 2025-05-19 19:36:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:40.718187 | orchestrator | 2025-05-19 19:36:40 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:40.718328 | orchestrator | 2025-05-19 19:36:40 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:40.718411 | orchestrator | 2025-05-19 19:36:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:43.757612 | orchestrator | 2025-05-19 19:36:43 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:43.757844 | orchestrator | 2025-05-19 
19:36:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:43.758832 | orchestrator | 2025-05-19 19:36:43 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:43.759423 | orchestrator | 2025-05-19 19:36:43 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:43.759447 | orchestrator | 2025-05-19 19:36:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:46.811573 | orchestrator | 2025-05-19 19:36:46 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:46.811819 | orchestrator | 2025-05-19 19:36:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:46.811845 | orchestrator | 2025-05-19 19:36:46 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:46.812446 | orchestrator | 2025-05-19 19:36:46 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:46.812665 | orchestrator | 2025-05-19 19:36:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:49.846956 | orchestrator | 2025-05-19 19:36:49 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:49.847464 | orchestrator | 2025-05-19 19:36:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:49.848361 | orchestrator | 2025-05-19 19:36:49 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:49.849673 | orchestrator | 2025-05-19 19:36:49 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:49.849751 | orchestrator | 2025-05-19 19:36:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:52.904835 | orchestrator | 2025-05-19 19:36:52 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:52.904941 | orchestrator | 2025-05-19 19:36:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:52.905451 | orchestrator | 2025-05-19 19:36:52 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:52.906230 | orchestrator | 2025-05-19 19:36:52 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:52.906247 | orchestrator | 2025-05-19 19:36:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:55.956511 | orchestrator | 2025-05-19 19:36:55 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:55.956581 | orchestrator | 2025-05-19 19:36:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:55.956587 | orchestrator | 2025-05-19 19:36:55 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:55.956642 | orchestrator | 2025-05-19 19:36:55 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:55.957059 | orchestrator | 2025-05-19 19:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:36:59.026589 | orchestrator | 2025-05-19 19:36:59 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:36:59.026713 | orchestrator | 2025-05-19 19:36:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:36:59.026720 | orchestrator | 2025-05-19 19:36:59 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:36:59.026759 | orchestrator | 2025-05-19 
19:36:59 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state STARTED 2025-05-19 19:36:59.026766 | orchestrator | 2025-05-19 19:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:02.077767 | orchestrator | 2025-05-19 19:37:02 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:02.080611 | orchestrator | 2025-05-19 19:37:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:02.081494 | orchestrator | 2025-05-19 19:37:02 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:02.083070 | orchestrator | 2025-05-19 19:37:02 | INFO  | Task 07aae83c-88cb-43ec-92cf-f6cc8c4e9f6f is in state SUCCESS 2025-05-19 19:37:02.083105 | orchestrator | 2025-05-19 19:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:05.121097 | orchestrator | 2025-05-19 19:37:05 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:05.122179 | orchestrator | 2025-05-19 19:37:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:05.125948 | orchestrator | 2025-05-19 19:37:05 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:05.126364 | orchestrator | 2025-05-19 19:37:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:08.161242 | orchestrator | 2025-05-19 19:37:08 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:08.164190 | orchestrator | 2025-05-19 19:37:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:08.166194 | orchestrator | 2025-05-19 19:37:08 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:08.166219 | orchestrator | 2025-05-19 19:37:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:11.206185 | orchestrator | 2025-05-19 19:37:11 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:11.206362 | orchestrator | 2025-05-19 19:37:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:11.209837 | orchestrator | 2025-05-19 19:37:11 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:11.209949 | orchestrator | 2025-05-19 19:37:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:14.255184 | orchestrator | 2025-05-19 19:37:14 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:14.257645 | orchestrator | 2025-05-19 19:37:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:14.260743 | orchestrator | 2025-05-19 19:37:14 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:14.260810 | orchestrator | 2025-05-19 19:37:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:17.320861 | orchestrator | 2025-05-19 19:37:17 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:17.321595 | orchestrator | 2025-05-19 19:37:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:17.323184 | orchestrator | 2025-05-19 19:37:17 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:17.323263 | orchestrator | 2025-05-19 19:37:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:20.379278 | orchestrator | 2025-05-19 19:37:20 | INFO  | Task 
a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:20.380582 | orchestrator | 2025-05-19 19:37:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:20.382176 | orchestrator | 2025-05-19 19:37:20 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:20.382321 | orchestrator | 2025-05-19 19:37:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:23.433537 | orchestrator | 2025-05-19 19:37:23 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:23.433652 | orchestrator | 2025-05-19 19:37:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:23.434266 | orchestrator | 2025-05-19 19:37:23 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:23.434324 | orchestrator | 2025-05-19 19:37:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:26.499348 | orchestrator | 2025-05-19 19:37:26 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:26.501148 | orchestrator | 2025-05-19 19:37:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:26.501821 | orchestrator | 2025-05-19 19:37:26 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:26.501971 | orchestrator | 2025-05-19 19:37:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:29.566616 | orchestrator | 2025-05-19 19:37:29 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:29.567377 | orchestrator | 2025-05-19 19:37:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:29.571467 | orchestrator | 2025-05-19 19:37:29 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:29.571524 | orchestrator | 2025-05-19 19:37:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:32.630889 | orchestrator | 2025-05-19 19:37:32 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:32.632272 | orchestrator | 2025-05-19 19:37:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:32.633976 | orchestrator | 2025-05-19 19:37:32 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:32.634071 | orchestrator | 2025-05-19 19:37:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:35.696461 | orchestrator | 2025-05-19 19:37:35 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state STARTED 2025-05-19 19:37:35.699226 | orchestrator | 2025-05-19 19:37:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:35.703768 | orchestrator | 2025-05-19 19:37:35 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:35.703969 | orchestrator | 2025-05-19 19:37:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:38.750932 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:38.751045 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:38.753736 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task a0fd9701-44be-4eca-996e-ece26a8e7e62 is in state SUCCESS 2025-05-19 19:37:38.757528 | orchestrator | 2025-05-19 19:37:38.757582 | orchestrator | 2025-05-19 
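The block above is the OSISM client polling the manager for the state of the queued deploy tasks: each play runs as a background task, the client re-checks its state every few seconds, and once a task flips from STARTED to SUCCESS the captured Ansible output is replayed into the console (the phpmyadmin play that follows). A rough sketch of that wait pattern in plain Ansible, using a placeholder status command and task id rather than the real client call (which is not shown in this log), might look like:

```yaml
# Sketch only: a generic poll-until-SUCCESS loop.
# "report-task-state" and "task_id" are hypothetical stand-ins; the real
# osism client invocation does not appear in this log.
- name: Wait for the background deploy task to finish
  ansible.builtin.command: "report-task-state {{ task_id }}"
  register: task_state
  changed_when: false
  retries: 600                              # upper bound on the wait
  delay: 3                                  # the checks above land roughly 3 s apart
  until: task_state.stdout == "SUCCESS"
```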
19:37:38.757595 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-19 19:37:38.757608 | orchestrator |
2025-05-19 19:37:38.757649 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-19 19:37:38.757661 | orchestrator | Monday 19 May 2025 19:35:34 +0000 (0:00:00.233) 0:00:00.233 ************
2025-05-19 19:37:38.757673 | orchestrator | ok: [testbed-manager]
2025-05-19 19:37:38.757686 | orchestrator |
2025-05-19 19:37:38.757698 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-19 19:37:38.757709 | orchestrator | Monday 19 May 2025 19:35:35 +0000 (0:00:01.162) 0:00:01.396 ************
2025-05-19 19:37:38.757720 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-19 19:37:38.757732 | orchestrator |
2025-05-19 19:37:38.757743 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-19 19:37:38.757754 | orchestrator | Monday 19 May 2025 19:35:36 +0000 (0:00:00.738) 0:00:02.134 ************
2025-05-19 19:37:38.757765 | orchestrator | changed: [testbed-manager]
2025-05-19 19:37:38.757776 | orchestrator |
2025-05-19 19:37:38.757787 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-19 19:37:38.757798 | orchestrator | Monday 19 May 2025 19:35:38 +0000 (0:00:02.191) 0:00:04.326 ************
2025-05-19 19:37:38.757809 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-19 19:37:38.757821 | orchestrator | ok: [testbed-manager]
2025-05-19 19:37:38.757832 | orchestrator |
2025-05-19 19:37:38.757843 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-19 19:37:38.757854 | orchestrator | Monday 19 May 2025 19:36:57 +0000 (0:01:18.840) 0:01:23.167 ************
2025-05-19 19:37:38.757864 | orchestrator | changed: [testbed-manager]
2025-05-19 19:37:38.757875 | orchestrator |
2025-05-19 19:37:38.757886 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:37:38.757897 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 19:37:38.757911 | orchestrator |
2025-05-19 19:37:38.757922 | orchestrator | Monday 19 May 2025 19:37:01 +0000 (0:00:03.899) 0:01:27.067 ************
2025-05-19 19:37:38.757945 | orchestrator | ===============================================================================
2025-05-19 19:37:38.757971 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 78.84s
2025-05-19 19:37:38.757990 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.90s
2025-05-19 19:37:38.758076 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 2.19s
2025-05-19 19:37:38.758090 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.16s
2025-05-19 19:37:38.758103 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.74s
2025-05-19 19:37:38.758190 | orchestrator |
2025-05-19 19:37:38.758204 | orchestrator |
2025-05-19 19:37:38.758217 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-19 19:37:38.758229 | orchestrator |
2025-05-19 19:37:38.758241 | orchestrator | TASK
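The osism.services.phpmyadmin play above does little more than create the external traefik network, place a docker-compose.yml under /opt/phpmyadmin, and bring the service up; the rendered compose file itself is not part of this log. A minimal sketch of what such a file could look like, with a placeholder image tag and no traefik router labels, assuming the pre-created external network:

```yaml
# Hypothetical sketch -- not the file rendered by the role, which this log
# does not show. Assumes the external "traefik" network created in the
# "Create traefik external network" task above.
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest   # placeholder image and tag
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"                  # let the user pick the DB host in the UI
    networks:
      - traefik

networks:
  traefik:
    external: true
```

Most of the 78.84 s spent in "Manage phpmyadmin service" (and the single retry visible above) is plausibly the initial image pull before the container comes up.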
[common : include_tasks] ************************************************** 2025-05-19 19:37:38.758253 | orchestrator | Monday 19 May 2025 19:35:14 +0000 (0:00:00.443) 0:00:00.443 ************ 2025-05-19 19:37:38.758266 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:37:38.758280 | orchestrator | 2025-05-19 19:37:38.758293 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-19 19:37:38.758305 | orchestrator | Monday 19 May 2025 19:35:16 +0000 (0:00:01.890) 0:00:02.333 ************ 2025-05-19 19:37:38.758317 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758329 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758342 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758354 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758367 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758379 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758392 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758405 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758418 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758429 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758440 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758450 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758461 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-19 19:37:38.758472 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758482 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758493 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758504 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758527 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-19 19:37:38.758539 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758549 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758560 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-19 19:37:38.758571 | orchestrator | 2025-05-19 19:37:38.758582 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-19 19:37:38.758592 | orchestrator | Monday 19 May 2025 19:35:19 +0000 (0:00:03.755) 0:00:06.089 
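The "Ensuring config directories exist" task above loops over (service dict, directory name) pairs on every host. A simplified sketch of such a loop; the target path, mode, and the inline pair list are assumptions, since the real kolla-ansible task derives them from its own variables:

```yaml
# Simplified sketch of the directory loop seen above; path and mode are assumed.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.1 }}"
    state: directory
    mode: "0770"
  loop:
    - [{ service_name: cron }, cron]
    - [{ service_name: fluentd }, fluentd]
    - [{ service_name: kolla-toolbox }, kolla-toolbox]
```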
************ 2025-05-19 19:37:38.758603 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:37:38.758626 | orchestrator | 2025-05-19 19:37:38.758637 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-19 19:37:38.758647 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:01.540) 0:00:07.629 ************ 2025-05-19 19:37:38.758663 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758742 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.758848 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.758994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.759006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.759024 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.759036 | orchestrator | 2025-05-19 19:37:38.759047 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-19 19:37:38.759067 | orchestrator | Monday 19 May 2025 19:35:27 +0000 (0:00:05.640) 0:00:13.270 ************ 2025-05-19 19:37:38.759086 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759099 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759159 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.759186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759245 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:37:38.759257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759316 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.759327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759409 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.759424 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.759451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759507 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.759534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759602 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.759614 | orchestrator | 2025-05-19 19:37:38.759625 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-19 19:37:38.759636 | orchestrator | Monday 19 May 2025 19:35:29 +0000 (0:00:02.138) 0:00:15.409 ************ 2025-05-19 19:37:38.759647 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759665 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759677 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759687 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.759698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759754 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:37:38.759773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759882 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759900 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.759912 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.759923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.759966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.759999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.760011 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.760022 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.760033 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.760044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-19 19:37:38.760061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.760073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.760091 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.760102 | orchestrator | 2025-05-19 19:37:38.760190 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-19 19:37:38.760210 | orchestrator | Monday 19 May 2025 19:35:32 +0000 (0:00:02.805) 0:00:18.214 ************ 2025-05-19 19:37:38.760227 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.760246 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:37:38.760265 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.760283 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.760300 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.760311 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.760321 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.760332 | orchestrator | 2025-05-19 19:37:38.760347 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-19 19:37:38.760365 | orchestrator | Monday 19 May 2025 19:35:33 +0000 (0:00:01.045) 0:00:19.260 ************ 2025-05-19 19:37:38.760383 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.760400 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 19:37:38.760435 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.760454 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.760472 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.760488 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.760505 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.760523 | orchestrator | 2025-05-19 19:37:38.760543 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-19 19:37:38.760564 | orchestrator | Monday 19 May 2025 19:35:34 +0000 (0:00:01.059) 0:00:20.319 ************ 2025-05-19 19:37:38.760615 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:37:38.760633 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.760651 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.760668 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.760686 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.760704 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.760722 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.760740 | orchestrator | 2025-05-19 19:37:38.760759 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-19 19:37:38.760778 | orchestrator | Monday 19 May 2025 19:36:04 +0000 (0:00:30.206) 0:00:50.526 ************ 2025-05-19 19:37:38.760797 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:37:38.760825 | orchestrator | ok: [testbed-manager] 2025-05-19 19:37:38.760837 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:37:38.760848 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:37:38.760859 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:37:38.760869 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:37:38.760880 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:37:38.760891 | orchestrator | 2025-05-19 19:37:38.760903 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-19 19:37:38.760921 | orchestrator | Monday 19 May 2025 19:36:07 +0000 (0:00:03.119) 0:00:53.645 ************ 2025-05-19 19:37:38.760940 | orchestrator | ok: [testbed-manager] 2025-05-19 19:37:38.760958 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:37:38.760976 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:37:38.760995 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:37:38.761012 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:37:38.761028 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:37:38.761039 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:37:38.761050 | orchestrator | 2025-05-19 19:37:38.761061 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ****************************** 2025-05-19 19:37:38.761072 | orchestrator | Monday 19 May 2025 19:36:08 +0000 (0:00:01.405) 0:00:55.051 ************ 2025-05-19 19:37:38.761100 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.761148 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:37:38.761160 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.761171 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.761182 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.761192 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.761203 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.761214 | orchestrator | 2025-05-19 19:37:38.761225 | orchestrator | TASK [common : Set fluentd facts] 
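The roughly 30 s "Ensure fluentd image is present for label check" step is the hosts pulling the fluentd image (testbed-node-0 appears to have had it already, hence "ok" rather than "changed"), after which the Docker image labels are inspected and turned into facts. A sketch of that pull-and-inspect pattern using the community.docker collection; the module names exist, but the registered variable and fact name below are made up for this example:

```yaml
# Sketch of the pull-and-inspect pattern behind the fluentd label check.
# "fluentd_image" and "fluentd_image_labels" are invented names.
- name: Ensure fluentd image is present for label check
  community.docker.docker_image:
    name: registry.osism.tech/kolla/release/fluentd:5.0.5.20241206
    source: pull

- name: Fetch fluentd Docker image labels
  community.docker.docker_image_info:
    name: registry.osism.tech/kolla/release/fluentd:5.0.5.20241206
  register: fluentd_image

- name: Set fluentd facts
  ansible.builtin.set_fact:
    fluentd_image_labels: "{{ fluentd_image.images[0].Config.Labels | default({}) }}"
```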
********************************************** 2025-05-19 19:37:38.761235 | orchestrator | Monday 19 May 2025 19:36:10 +0000 (0:00:01.289) 0:00:56.340 ************ 2025-05-19 19:37:38.761246 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:37:38.761257 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:37:38.761267 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:37:38.761278 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:37:38.761288 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:37:38.761299 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:37:38.761310 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:37:38.761320 | orchestrator | 2025-05-19 19:37:38.761331 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-19 19:37:38.761342 | orchestrator | Monday 19 May 2025 19:36:11 +0000 (0:00:01.067) 0:00:57.407 ************ 2025-05-19 19:37:38.761354 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761389 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761452 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.761540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761603 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.761670 | orchestrator | 2025-05-19 19:37:38.761681 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-05-19 19:37:38.761692 | orchestrator | Monday 19 May 2025 19:36:16 +0000 (0:00:05.641) 0:01:03.049 ************ 2025-05-19 19:37:38.761703 | orchestrator | [WARNING]: Skipped 2025-05-19 19:37:38.761721 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-05-19 19:37:38.761740 | orchestrator | to this access issue: 2025-05-19 19:37:38.761757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-05-19 19:37:38.761775 | orchestrator | directory 2025-05-19 19:37:38.761793 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:37:38.761812 | orchestrator | 2025-05-19 19:37:38.761831 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-05-19 19:37:38.761849 | orchestrator | Monday 19 May 2025 19:36:17 +0000 (0:00:00.929) 0:01:03.978 ************ 2025-05-19 19:37:38.761867 | orchestrator | [WARNING]: Skipped 2025-05-19 19:37:38.761885 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-19 19:37:38.761903 | orchestrator | to this access issue: 2025-05-19 19:37:38.761923 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-19 19:37:38.761943 | orchestrator | directory 2025-05-19 19:37:38.761962 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:37:38.761979 | orchestrator | 2025-05-19 19:37:38.761994 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-19 19:37:38.762012 | orchestrator | Monday 19 May 2025 19:36:18 +0000 (0:00:00.783) 0:01:04.762 ************ 2025-05-19 19:37:38.762162 | orchestrator | [WARNING]: Skipped 2025-05-19 19:37:38.762187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-19 19:37:38.762217 | orchestrator | to this access issue: 2025-05-19 19:37:38.762238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-19 19:37:38.762258 | orchestrator | directory 2025-05-19 19:37:38.762275 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:37:38.762293 | orchestrator | 2025-05-19 19:37:38.762310 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-19 19:37:38.762329 | orchestrator | Monday 19 May 2025 19:36:19 +0000 (0:00:00.687) 0:01:05.449 ************ 2025-05-19 19:37:38.762348 | orchestrator | [WARNING]: Skipped 2025-05-19 19:37:38.762367 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-19 19:37:38.762385 | orchestrator | to this access issue: 2025-05-19 19:37:38.762403 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-19 19:37:38.762422 | orchestrator | directory 2025-05-19 19:37:38.762442 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:37:38.762461 | orchestrator | 2025-05-19 19:37:38.762474 | orchestrator | TASK [common : Copying over td-agent.conf] ************************************* 2025-05-19 19:37:38.762486 | orchestrator | Monday 19 May 2025 19:36:19 +0000 (0:00:00.666) 0:01:06.115 ************ 2025-05-19 19:37:38.762509 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.762520 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.762531 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.762541 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.762552 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.762563 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.762573 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.762584 | orchestrator | 2025-05-19 19:37:38.762594 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-19 19:37:38.762605 | orchestrator | Monday 19 May 2025 19:36:24 +0000 (0:00:04.705) 0:01:10.821 ************ 2025-05-19 19:37:38.762615 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762631 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762650 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762687 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762706 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762725 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-19 19:37:38.762744 | orchestrator | 2025-05-19 19:37:38.762762 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-19 19:37:38.762777 | orchestrator | Monday 19 May 2025 19:36:27 +0000 (0:00:02.533) 0:01:13.354 ************ 2025-05-19 19:37:38.762788 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.762799 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.762810 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.762821 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.762832 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.762856 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.762867 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.762882 | orchestrator | 2025-05-19 19:37:38.762900 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-19 19:37:38.762918 | orchestrator | Monday 19 May 2025 19:36:29 +0000 (0:00:02.152) 0:01:15.506 ************ 2025-05-19 19:37:38.762937 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.762957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.762985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.763017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.763037 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.763086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.763150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.763172 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.763192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.763212 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.763255 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.763277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.763297 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.763317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.764547 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764592 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
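Editor's note on the loops above: every task in this common play iterates the same service map, whose per-item shape is visible in the (item={...}) output, and the ok/skipping split per host comes from filtering that map. The following is a minimal, illustrative Python sketch of that structure and of the kind of enabled/group filtering behind the pattern in the log; it is not kolla-ansible's actual task code, and the host_groups parameter is a hypothetical stand-in.

# Illustrative sketch only: models the service map printed as (item={...}) above.
# Not kolla-ansible source; host_groups is a hypothetical helper parameter.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/fluentd:5.0.5.20241206",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
                    "kolla_logs:/var/log/kolla/"],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/cron:3.0.20241206",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
}

def services_to_handle(services, host_groups):
    """Yield (name, definition) pairs a host would act on, mimicking the
    enabled/group checks that produce ok vs. skipping results in the log."""
    for name, svc in services.items():
        if svc["enabled"] and svc["group"] in host_groups:
            yield name, svc

# Example: a host in the fluentd and cron groups, but not in kolla-toolbox,
# acts on two of the three items and skips the rest.
for name, svc in services_to_handle(common_services, host_groups={"fluentd", "cron"}):
    print(name, svc["image"])

In the real role the equivalent filtering happens per item inside Ansible with_dict loops and when: conditions, which is why each host reports a separate ok, changed, or skipping line for every service in the map.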
2025-05-19 19:37:38.764602 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:37:38.764635 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764643 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764652 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764660 | orchestrator | 2025-05-19 19:37:38.764669 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-19 19:37:38.764678 | orchestrator | Monday 19 May 2025 19:36:31 +0000 (0:00:01.892) 0:01:17.399 ************ 2025-05-19 19:37:38.764686 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764694 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764710 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764717 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764725 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764733 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-19 19:37:38.764740 | orchestrator | 2025-05-19 19:37:38.764748 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-19 19:37:38.764769 | orchestrator | Monday 19 May 2025 19:36:34 +0000 (0:00:02.771) 0:01:20.171 ************ 2025-05-19 19:37:38.764777 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764785 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764793 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764801 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764809 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764821 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764829 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 19:37:38.764837 | orchestrator | 2025-05-19 19:37:38.764845 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-19 19:37:38.764852 | orchestrator | Monday 19 May 2025 19:36:36 +0000 (0:00:02.244) 0:01:22.416 ************ 2025-05-19 19:37:38.764861 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764889 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764933 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764954 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.764983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 19:37:38.764997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765036 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:37:38.765073 | orchestrator | 2025-05-19 19:37:38.765081 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-19 19:37:38.765089 | orchestrator | Monday 19 May 2025 19:36:39 +0000 (0:00:03.605) 0:01:26.021 ************ 2025-05-19 19:37:38.765097 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.765142 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.765158 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.765167 | 
orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.765175 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.765184 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.765192 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.765201 | orchestrator | 2025-05-19 19:37:38.765210 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-19 19:37:38.765219 | orchestrator | Monday 19 May 2025 19:36:41 +0000 (0:00:01.789) 0:01:27.811 ************ 2025-05-19 19:37:38.765228 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.765237 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.765245 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.765254 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.765262 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.765271 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.765280 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.765289 | orchestrator | 2025-05-19 19:37:38.765298 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765307 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:01.477) 0:01:29.288 ************ 2025-05-19 19:37:38.765316 | orchestrator | 2025-05-19 19:37:38.765325 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765334 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.058) 0:01:29.346 ************ 2025-05-19 19:37:38.765343 | orchestrator | 2025-05-19 19:37:38.765352 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765361 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.052) 0:01:29.398 ************ 2025-05-19 19:37:38.765370 | orchestrator | 2025-05-19 19:37:38.765379 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765388 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.054) 0:01:29.453 ************ 2025-05-19 19:37:38.765397 | orchestrator | 2025-05-19 19:37:38.765405 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765414 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.258) 0:01:29.711 ************ 2025-05-19 19:37:38.765422 | orchestrator | 2025-05-19 19:37:38.765431 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765440 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.051) 0:01:29.763 ************ 2025-05-19 19:37:38.765449 | orchestrator | 2025-05-19 19:37:38.765457 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-19 19:37:38.765470 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.052) 0:01:29.815 ************ 2025-05-19 19:37:38.765479 | orchestrator | 2025-05-19 19:37:38.765489 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-19 19:37:38.765498 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.068) 0:01:29.884 ************ 2025-05-19 19:37:38.765506 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.765534 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.765543 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.765551 | 
orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.765558 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.765566 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.765574 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.765581 | orchestrator | 2025-05-19 19:37:38.765589 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-19 19:37:38.765606 | orchestrator | Monday 19 May 2025 19:36:52 +0000 (0:00:09.275) 0:01:39.160 ************ 2025-05-19 19:37:38.765619 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.765631 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.765643 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.765655 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.765668 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.765679 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.765690 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.765701 | orchestrator | 2025-05-19 19:37:38.765713 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-19 19:37:38.765726 | orchestrator | Monday 19 May 2025 19:37:22 +0000 (0:00:29.476) 0:02:08.636 ************ 2025-05-19 19:37:38.765739 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:37:38.765754 | orchestrator | ok: [testbed-manager] 2025-05-19 19:37:38.765769 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:37:38.765782 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:37:38.765795 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:37:38.765803 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:37:38.765810 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:37:38.765818 | orchestrator | 2025-05-19 19:37:38.765826 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-19 19:37:38.765834 | orchestrator | Monday 19 May 2025 19:37:25 +0000 (0:00:02.643) 0:02:11.280 ************ 2025-05-19 19:37:38.765841 | orchestrator | changed: [testbed-manager] 2025-05-19 19:37:38.765849 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:37:38.765857 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:37:38.765864 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:37:38.765872 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:37:38.765880 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:37:38.765887 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:37:38.765895 | orchestrator | 2025-05-19 19:37:38.765903 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:37:38.765911 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765921 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765929 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765943 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765951 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765959 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765967 | 
orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:37:38.765975 | orchestrator | 2025-05-19 19:37:38.765983 | orchestrator | 2025-05-19 19:37:38.765990 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:37:38.765998 | orchestrator | Monday 19 May 2025 19:37:35 +0000 (0:00:10.857) 0:02:22.137 ************ 2025-05-19 19:37:38.766006 | orchestrator | =============================================================================== 2025-05-19 19:37:38.766014 | orchestrator | common : Ensure fluentd image is present for label check --------------- 30.21s 2025-05-19 19:37:38.766078 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 29.48s 2025-05-19 19:37:38.766087 | orchestrator | common : Restart cron container ---------------------------------------- 10.86s 2025-05-19 19:37:38.766101 | orchestrator | common : Restart fluentd container -------------------------------------- 9.28s 2025-05-19 19:37:38.766134 | orchestrator | common : Copying over config.json files for services -------------------- 5.64s 2025-05-19 19:37:38.766145 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.64s 2025-05-19 19:37:38.766156 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.71s 2025-05-19 19:37:38.766170 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.76s 2025-05-19 19:37:38.766189 | orchestrator | common : Check common containers ---------------------------------------- 3.61s 2025-05-19 19:37:38.766201 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 3.12s 2025-05-19 19:37:38.766213 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.81s 2025-05-19 19:37:38.766231 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.77s 2025-05-19 19:37:38.766244 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.64s 2025-05-19 19:37:38.766256 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.53s 2025-05-19 19:37:38.766267 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.24s 2025-05-19 19:37:38.766283 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.15s 2025-05-19 19:37:38.766295 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.14s 2025-05-19 19:37:38.766308 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.89s 2025-05-19 19:37:38.766319 | orchestrator | common : include_tasks -------------------------------------------------- 1.89s 2025-05-19 19:37:38.766332 | orchestrator | common : Creating log volume -------------------------------------------- 1.79s 2025-05-19 19:37:38.766350 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:38.766363 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:38.766453 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:38.766546 | orchestrator | 2025-05-19 19:37:38 | INFO  | Task 
16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:38.766564 | orchestrator | 2025-05-19 19:37:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:41.822642 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:41.822898 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:41.823892 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:41.824606 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:41.825356 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:41.825924 | orchestrator | 2025-05-19 19:37:41 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:41.826127 | orchestrator | 2025-05-19 19:37:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:44.866475 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:44.866602 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:44.867386 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:44.868527 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:44.869381 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:44.871132 | orchestrator | 2025-05-19 19:37:44 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:44.871223 | orchestrator | 2025-05-19 19:37:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:47.904154 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:47.905635 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:47.906541 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:47.907131 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:47.907912 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:47.909055 | orchestrator | 2025-05-19 19:37:47 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:47.909681 | orchestrator | 2025-05-19 19:37:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:50.960204 | orchestrator | 2025-05-19 19:37:50 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:50.960387 | orchestrator | 2025-05-19 19:37:50 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:50.962836 | orchestrator | 2025-05-19 19:37:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:50.963319 | orchestrator | 2025-05-19 19:37:50 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:50.963787 | orchestrator | 2025-05-19 
19:37:50 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:50.973893 | orchestrator | 2025-05-19 19:37:50 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:50.973987 | orchestrator | 2025-05-19 19:37:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:54.022837 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:54.024921 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:54.027438 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:54.028569 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:54.029409 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:54.030394 | orchestrator | 2025-05-19 19:37:54 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:54.030422 | orchestrator | 2025-05-19 19:37:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:37:57.099036 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:37:57.101386 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:37:57.106860 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:37:57.115084 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:37:57.119868 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:37:57.124769 | orchestrator | 2025-05-19 19:37:57 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:37:57.124932 | orchestrator | 2025-05-19 19:37:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:00.177446 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state STARTED 2025-05-19 19:38:00.177560 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:00.177569 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:00.177579 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:00.178384 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:00.181571 | orchestrator | 2025-05-19 19:38:00 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:38:00.181675 | orchestrator | 2025-05-19 19:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:03.237291 | orchestrator | 2025-05-19 19:38:03 | INFO  | Task f99a6dda-77ba-4c5a-b090-1b946ececd03 is in state SUCCESS 2025-05-19 19:38:03.237481 | orchestrator | 2025-05-19 19:38:03 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:03.237964 | orchestrator | 2025-05-19 19:38:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:03.238855 | 
orchestrator | 2025-05-19 19:38:03 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:03.239322 | orchestrator | 2025-05-19 19:38:03 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:03.239770 | orchestrator | 2025-05-19 19:38:03 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:38:03.239804 | orchestrator | 2025-05-19 19:38:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:06.276494 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:06.277408 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:06.278265 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:06.278603 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:06.279761 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state STARTED 2025-05-19 19:38:06.280566 | orchestrator | 2025-05-19 19:38:06 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:38:06.280850 | orchestrator | 2025-05-19 19:38:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:09.307080 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:09.307233 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:09.307453 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:09.308014 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:09.308631 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task 16b8f212-7ab1-43a3-bc87-41fc36c4cda8 is in state SUCCESS 2025-05-19 19:38:09.309872 | orchestrator | 2025-05-19 19:38:09.309923 | orchestrator | 2025-05-19 19:38:09.309929 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:38:09.309935 | orchestrator | 2025-05-19 19:38:09.309940 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:38:09.309944 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.292) 0:00:00.292 ************ 2025-05-19 19:38:09.309948 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:38:09.309954 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:38:09.309958 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:38:09.309962 | orchestrator | 2025-05-19 19:38:09.309966 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:38:09.309970 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:00.746) 0:00:01.038 ************ 2025-05-19 19:38:09.309975 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-19 19:38:09.309979 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-19 19:38:09.309983 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-19 19:38:09.309987 | orchestrator | 2025-05-19 19:38:09.309990 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-19 
19:38:09.309994 | orchestrator | 2025-05-19 19:38:09.309998 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-19 19:38:09.310002 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:00.527) 0:00:01.566 ************ 2025-05-19 19:38:09.310006 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:38:09.310010 | orchestrator | 2025-05-19 19:38:09.310048 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-19 19:38:09.310053 | orchestrator | Monday 19 May 2025 19:37:43 +0000 (0:00:00.965) 0:00:02.532 ************ 2025-05-19 19:38:09.310057 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 19:38:09.310061 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 19:38:09.310066 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 19:38:09.310069 | orchestrator | 2025-05-19 19:38:09.310073 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-19 19:38:09.310077 | orchestrator | Monday 19 May 2025 19:37:45 +0000 (0:00:01.267) 0:00:03.800 ************ 2025-05-19 19:38:09.310081 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 19:38:09.310084 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 19:38:09.310088 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 19:38:09.310092 | orchestrator | 2025-05-19 19:38:09.310096 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-19 19:38:09.310116 | orchestrator | Monday 19 May 2025 19:37:47 +0000 (0:00:02.432) 0:00:06.232 ************ 2025-05-19 19:38:09.310120 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:38:09.310124 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:38:09.310128 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:38:09.310132 | orchestrator | 2025-05-19 19:38:09.310136 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-19 19:38:09.310140 | orchestrator | Monday 19 May 2025 19:37:50 +0000 (0:00:02.763) 0:00:08.996 ************ 2025-05-19 19:38:09.310143 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:38:09.310147 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:38:09.310151 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:38:09.310155 | orchestrator | 2025-05-19 19:38:09.310159 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:38:09.310163 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310181 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310185 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310189 | orchestrator | 2025-05-19 19:38:09.310192 | orchestrator | 2025-05-19 19:38:09.310196 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:38:09.310200 | orchestrator | Monday 19 May 2025 19:37:59 +0000 (0:00:09.283) 0:00:18.279 ************ 2025-05-19 19:38:09.310204 | orchestrator | =============================================================================== 2025-05-19 
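The memcached play above copies a config.json for the service before restarting the container. That file tells the container entrypoint which command to run and which config files to install at startup; the real content is rendered from kolla-ansible templates, so the following snippet only sketches the general shape with made-up values (listen address, permissions) rather than the actual template output:

```python
import json

# Illustrative only: a simplified config.json of the kind placed under
# /etc/kolla/memcached/ for the container to consume. The real file is
# generated by kolla-ansible and contains the deployment's own values.
memcached_config = {
    "command": "/usr/bin/memcached -vv -l 192.0.2.10 -p 11211",  # placeholder listen address
    "config_files": [],  # memcached itself needs no extra config files copied in
    "permissions": [
        {"path": "/var/log/kolla/memcached", "owner": "memcached:memcached", "recurse": True},
    ],
}

print(json.dumps(memcached_config, indent=4))
```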
19:38:09.310216 | orchestrator | memcached : Restart memcached container --------------------------------- 9.28s 2025-05-19 19:38:09.310220 | orchestrator | memcached : Check memcached container ----------------------------------- 2.76s 2025-05-19 19:38:09.310224 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.43s 2025-05-19 19:38:09.310228 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.27s 2025-05-19 19:38:09.310232 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.97s 2025-05-19 19:38:09.310236 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s 2025-05-19 19:38:09.310239 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2025-05-19 19:38:09.310243 | orchestrator | 2025-05-19 19:38:09.310247 | orchestrator | 2025-05-19 19:38:09.310251 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:38:09.310255 | orchestrator | 2025-05-19 19:38:09.310258 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:38:09.310262 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.622) 0:00:00.622 ************ 2025-05-19 19:38:09.310266 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:38:09.310270 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:38:09.310274 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:38:09.310277 | orchestrator | 2025-05-19 19:38:09.310281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:38:09.310295 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.637) 0:00:01.259 ************ 2025-05-19 19:38:09.310299 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-19 19:38:09.310304 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-19 19:38:09.310307 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-19 19:38:09.310311 | orchestrator | 2025-05-19 19:38:09.310315 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-19 19:38:09.310319 | orchestrator | 2025-05-19 19:38:09.310323 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-19 19:38:09.310326 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:00.312) 0:00:01.571 ************ 2025-05-19 19:38:09.310330 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:38:09.310334 | orchestrator | 2025-05-19 19:38:09.310338 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-19 19:38:09.310342 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:00.779) 0:00:02.350 ************ 2025-05-19 19:38:09.310349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310390 | orchestrator | 2025-05-19 
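Each loop item printed above is one service definition: a key plus a value carrying container_name, image, volumes, dimensions and a healthcheck. A rough Python model of that structure, mirroring the field names from the log but not the actual kolla-ansible data model, shows how tasks like "Ensuring config directories exist" simply iterate over this mapping:

```python
from dataclasses import dataclass, field
from typing import Optional

# Field names mirror the dictionaries visible in the loop items above; this is
# an illustration of their shape, not the kolla-ansible implementation.
@dataclass
class Healthcheck:
    interval: int = 30
    retries: int = 3
    start_period: int = 5
    timeout: int = 30
    test: list = field(default_factory=list)

@dataclass
class Service:
    container_name: str
    image: str
    volumes: list
    enabled: bool = True
    healthcheck: Optional[Healthcheck] = None

redis_services = {
    "redis": Service(
        container_name="redis",
        image="registry.osism.tech/kolla/release/redis:6.0.16.20241206",
        volumes=["/etc/kolla/redis/:/var/lib/kolla/config_files/:ro", "redis:/var/lib/redis/"],
        healthcheck=Healthcheck(test=["CMD-SHELL", "healthcheck_listen redis-server 6379"]),
    ),
    "redis-sentinel": Service(
        container_name="redis_sentinel",
        image="registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206",
        volumes=["/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro"],
        healthcheck=Healthcheck(test=["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"]),
    ),
}

# Tasks such as "Ensuring config directories exist" loop over exactly this
# mapping, creating /etc/kolla/<key>/ for every enabled service.
for name, svc in redis_services.items():
    if svc.enabled:
        print(f"would create /etc/kolla/{name}/ for container {svc.container_name}")
```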
19:38:09.310394 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-19 19:38:09.310398 | orchestrator | Monday 19 May 2025 19:37:44 +0000 (0:00:01.525) 0:00:03.876 ************ 2025-05-19 19:38:09.310409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310448 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310452 | orchestrator | 2025-05-19 19:38:09.310456 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-19 19:38:09.310460 | orchestrator | Monday 19 May 2025 19:37:47 +0000 (0:00:03.488) 0:00:07.364 ************ 2025-05-19 19:38:09.310464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310488 | 
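Every service definition above also carries a healthcheck block (interval, retries, start_period, test, timeout). As an illustration of what that block expresses, the sketch below maps it onto docker run style health flags; this is an assumed, simplified translation for readability, not the code path kolla-ansible actually uses to configure containers:

```python
def healthcheck_to_docker_args(hc: dict) -> list:
    """Translate a kolla-style healthcheck dict into `docker run` style flags.

    Illustration only: the deployment applies these settings through its own
    container module, not by shelling out to `docker run` like this.
    """
    test = hc["test"]
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

redis_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"], "timeout": "30",
}
print(" ".join(healthcheck_to_docker_args(redis_hc)))
```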
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310503 | orchestrator | 2025-05-19 19:38:09.310507 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-19 19:38:09.310512 | orchestrator | Monday 19 May 2025 19:37:51 +0000 (0:00:03.649) 0:00:11.014 ************ 2025-05-19 19:38:09.310516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 
19:38:09.310534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 19:38:09.310550 | orchestrator | 2025-05-19 19:38:09.310554 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 19:38:09.310559 | orchestrator | Monday 19 May 2025 19:37:54 +0000 (0:00:02.776) 0:00:13.790 ************ 2025-05-19 19:38:09.310566 | orchestrator | 2025-05-19 19:38:09.310571 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 19:38:09.310575 | orchestrator | Monday 19 May 2025 19:37:54 +0000 (0:00:00.097) 0:00:13.888 ************ 2025-05-19 19:38:09.310579 | orchestrator | 2025-05-19 19:38:09.310583 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-19 19:38:09.310592 | orchestrator | Monday 19 May 2025 19:37:54 +0000 (0:00:00.069) 0:00:13.957 ************ 2025-05-19 19:38:09.310596 | orchestrator | 2025-05-19 19:38:09.310601 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-19 19:38:09.310605 | orchestrator | Monday 19 May 2025 19:37:54 +0000 (0:00:00.084) 0:00:14.041 ************ 2025-05-19 19:38:09.310609 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:38:09.310614 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:38:09.310618 | orchestrator | changed: 
[testbed-node-2] 2025-05-19 19:38:09.310622 | orchestrator | 2025-05-19 19:38:09.310627 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-19 19:38:09.310631 | orchestrator | Monday 19 May 2025 19:37:59 +0000 (0:00:05.119) 0:00:19.160 ************ 2025-05-19 19:38:09.310635 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:38:09.310640 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:38:09.310644 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:38:09.310648 | orchestrator | 2025-05-19 19:38:09.310653 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:38:09.310657 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310662 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310666 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:38:09.310671 | orchestrator | 2025-05-19 19:38:09.310675 | orchestrator | 2025-05-19 19:38:09.310679 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:38:09.310684 | orchestrator | Monday 19 May 2025 19:38:06 +0000 (0:00:06.863) 0:00:26.024 ************ 2025-05-19 19:38:09.310688 | orchestrator | =============================================================================== 2025-05-19 19:38:09.310693 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 6.86s 2025-05-19 19:38:09.310697 | orchestrator | redis : Restart redis container ----------------------------------------- 5.12s 2025-05-19 19:38:09.310701 | orchestrator | redis : Copying over redis config files --------------------------------- 3.65s 2025-05-19 19:38:09.310706 | orchestrator | redis : Copying over default config.json files -------------------------- 3.49s 2025-05-19 19:38:09.310710 | orchestrator | redis : Check redis containers ------------------------------------------ 2.78s 2025-05-19 19:38:09.310714 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.53s 2025-05-19 19:38:09.310719 | orchestrator | redis : include_tasks --------------------------------------------------- 0.78s 2025-05-19 19:38:09.310723 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.64s 2025-05-19 19:38:09.310727 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2025-05-19 19:38:09.310732 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.25s 2025-05-19 19:38:09.310775 | orchestrator | 2025-05-19 19:38:09 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:38:09.310781 | orchestrator | 2025-05-19 19:38:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:12.343594 | orchestrator | 2025-05-19 19:38:12 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:12.348614 | orchestrator | 2025-05-19 19:38:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:12.349493 | orchestrator | 2025-05-19 19:38:12 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:12.351607 | orchestrator | 2025-05-19 19:38:12 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state 
STARTED 2025-05-19 19:38:12.352436 | orchestrator | 2025-05-19 19:38:12 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:38:12.352535 | orchestrator | 2025-05-19 19:38:12 | INFO  | Wait 1 second(s) until the next check
[polling cycles every ~3 seconds from 19:38:15 through 19:38:52: tasks d670f4a6-f68c-4e52-bfc9-35bb887844d2, 6cbcb477-08de-4f2b-846d-588e50cbe210, 677fdd63-0fab-44f5-96d8-fc3658f5061b, 3415b310-186d-4044-8241-fb266085dc9e and 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 all still in state STARTED, with a 1 second wait between checks]
2025-05-19 19:38:55.083641 | orchestrator | 2025-05-19 19:38:55 | INFO  | Task
d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:55.084996 | orchestrator | 2025-05-19 19:38:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:55.086794 | orchestrator | 2025-05-19 19:38:55 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:55.088547 | orchestrator | 2025-05-19 19:38:55 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:55.090864 | orchestrator | 2025-05-19 19:38:55 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:38:55.090914 | orchestrator | 2025-05-19 19:38:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:38:58.145268 | orchestrator | 2025-05-19 19:38:58 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:38:58.146483 | orchestrator | 2025-05-19 19:38:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:38:58.148197 | orchestrator | 2025-05-19 19:38:58 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:38:58.149336 | orchestrator | 2025-05-19 19:38:58 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state STARTED 2025-05-19 19:38:58.150803 | orchestrator | 2025-05-19 19:38:58 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:38:58.151137 | orchestrator | 2025-05-19 19:38:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:01.205930 | orchestrator | 2025-05-19 19:39:01 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:01.206178 | orchestrator | 2025-05-19 19:39:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:01.207977 | orchestrator | 2025-05-19 19:39:01 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:01.209848 | orchestrator | 2025-05-19 19:39:01 | INFO  | Task 3415b310-186d-4044-8241-fb266085dc9e is in state SUCCESS 2025-05-19 19:39:01.213333 | orchestrator | 2025-05-19 19:39:01.213376 | orchestrator | 2025-05-19 19:39:01.213389 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:39:01.213403 | orchestrator | 2025-05-19 19:39:01.213414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:39:01.213426 | orchestrator | Monday 19 May 2025 19:37:40 +0000 (0:00:00.589) 0:00:00.589 ************ 2025-05-19 19:39:01.213438 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:39:01.213451 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:39:01.213462 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:39:01.213473 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:39:01.213484 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:39:01.213498 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:39:01.213509 | orchestrator | 2025-05-19 19:39:01.213519 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:39:01.213531 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.848) 0:00:01.437 ************ 2025-05-19 19:39:01.213543 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213555 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213574 | orchestrator | ok: [testbed-node-5] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213586 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213597 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213609 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-19 19:39:01.213620 | orchestrator | 2025-05-19 19:39:01.213632 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-19 19:39:01.213643 | orchestrator | 2025-05-19 19:39:01.213653 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-19 19:39:01.213665 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:01.009) 0:00:02.447 ************ 2025-05-19 19:39:01.213697 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:39:01.213710 | orchestrator | 2025-05-19 19:39:01.213720 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 19:39:01.213731 | orchestrator | Monday 19 May 2025 19:37:44 +0000 (0:00:01.754) 0:00:04.201 ************ 2025-05-19 19:39:01.213740 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-19 19:39:01.213751 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-19 19:39:01.213762 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-19 19:39:01.213772 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-19 19:39:01.213782 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-19 19:39:01.213793 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-19 19:39:01.213803 | orchestrator | 2025-05-19 19:39:01.213814 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 19:39:01.213824 | orchestrator | Monday 19 May 2025 19:37:46 +0000 (0:00:02.141) 0:00:06.343 ************ 2025-05-19 19:39:01.213835 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-19 19:39:01.213845 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-19 19:39:01.213855 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-19 19:39:01.213866 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-19 19:39:01.213877 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-19 19:39:01.213887 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-19 19:39:01.213897 | orchestrator | 2025-05-19 19:39:01.213908 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 19:39:01.213918 | orchestrator | Monday 19 May 2025 19:37:49 +0000 (0:00:02.471) 0:00:08.815 ************ 2025-05-19 19:39:01.213928 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-19 19:39:01.213939 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:39:01.213950 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-19 19:39:01.213960 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:39:01.213971 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-19 19:39:01.213981 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:39:01.213991 | 
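The module-load tasks above first load the openvswitch kernel module on every node and then persist it via modules-load.d; the "Drop module persistence" step is skipped because persistence is wanted here. A small sketch of the persistence step, using a hypothetical helper rather than the role's own template:

```python
from pathlib import Path

def persist_module(name: str, basedir: str = "/etc/modules-load.d") -> Path:
    """Write <basedir>/<name>.conf so systemd-modules-load loads the module at boot.

    Equivalent in spirit to the 'Persist modules via modules-load.d' task above;
    loading the module immediately would additionally require `modprobe <name>`.
    """
    conf = Path(basedir) / f"{name}.conf"
    conf.parent.mkdir(parents=True, exist_ok=True)
    conf.write_text(f"{name}\n")
    return conf

if __name__ == "__main__":
    # Use a scratch directory so the example is safe to run anywhere.
    print(persist_module("openvswitch", basedir="/tmp/modules-load.d"))
```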
orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-19 19:39:01.214001 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:39:01.214011 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-19 19:39:01.214081 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:39:01.214109 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-19 19:39:01.214120 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:39:01.214130 | orchestrator | 2025-05-19 19:39:01.214140 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-19 19:39:01.214150 | orchestrator | Monday 19 May 2025 19:37:51 +0000 (0:00:02.211) 0:00:11.027 ************ 2025-05-19 19:39:01.214161 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:39:01.214171 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:39:01.214181 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:39:01.214192 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:39:01.214202 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:39:01.214212 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:39:01.214222 | orchestrator | 2025-05-19 19:39:01.214233 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-19 19:39:01.214243 | orchestrator | Monday 19 May 2025 19:37:52 +0000 (0:00:01.385) 0:00:12.412 ************ 2025-05-19 19:39:01.214270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214450 | orchestrator | 2025-05-19 19:39:01.214461 | orchestrator | TASK [openvswitch : Copying over 
config.json files for services] *************** 2025-05-19 19:39:01.214472 | orchestrator | Monday 19 May 2025 19:37:55 +0000 (0:00:02.476) 0:00:14.888 ************ 2025-05-19 19:39:01.214487 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214645 | orchestrator | 2025-05-19 19:39:01.214659 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-19 19:39:01.214669 | orchestrator | Monday 19 May 2025 19:37:58 +0000 (0:00:03.417) 0:00:18.306 ************ 2025-05-19 19:39:01.214680 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:39:01.214690 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:39:01.214700 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:39:01.214710 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:39:01.214720 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:39:01.214730 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:39:01.214740 | orchestrator | 2025-05-19 19:39:01.214751 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-19 19:39:01.214761 | orchestrator | Monday 19 May 2025 19:38:03 +0000 (0:00:04.597) 0:00:22.903 ************ 2025-05-19 19:39:01.214772 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:39:01.214782 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:39:01.214792 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:39:01.214802 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:39:01.214813 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:39:01.214823 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:39:01.214833 | orchestrator | 
2025-05-19 19:39:01.214844 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-19 19:39:01.214854 | orchestrator | Monday 19 May 2025 19:38:05 +0000 (0:00:02.448) 0:00:25.351 ************ 2025-05-19 19:39:01.214865 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:39:01.214875 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:39:01.214886 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:39:01.214896 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:39:01.214906 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:39:01.214917 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:39:01.214927 | orchestrator | 2025-05-19 19:39:01.214937 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-19 19:39:01.214947 | orchestrator | Monday 19 May 2025 19:38:07 +0000 (0:00:01.475) 0:00:26.827 ************ 2025-05-19 19:39:01.214957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.214991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215080 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 19:39:01.215169 | orchestrator | 2025-05-19 19:39:01.215182 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 19:39:01.215192 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:02.352) 0:00:29.179 ************ 2025-05-19 19:39:01.215202 | orchestrator | 2025-05-19 19:39:01.215213 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 19:39:01.215223 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:00.127) 0:00:29.307 ************ 2025-05-19 19:39:01.215234 | orchestrator | 2025-05-19 19:39:01.215244 | orchestrator | TASK [openvswitch 
: Flush Handlers] ******************************************** 2025-05-19 19:39:01.215255 | orchestrator | Monday 19 May 2025 19:38:10 +0000 (0:00:00.631) 0:00:29.938 ************ 2025-05-19 19:39:01.215266 | orchestrator | 2025-05-19 19:39:01.215276 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 19:39:01.215287 | orchestrator | Monday 19 May 2025 19:38:10 +0000 (0:00:00.191) 0:00:30.130 ************ 2025-05-19 19:39:01.215297 | orchestrator | 2025-05-19 19:39:01.215307 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 19:39:01.215318 | orchestrator | Monday 19 May 2025 19:38:10 +0000 (0:00:00.337) 0:00:30.468 ************ 2025-05-19 19:39:01.215328 | orchestrator | 2025-05-19 19:39:01.215338 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 19:39:01.215349 | orchestrator | Monday 19 May 2025 19:38:10 +0000 (0:00:00.108) 0:00:30.576 ************ 2025-05-19 19:39:01.215360 | orchestrator | 2025-05-19 19:39:01.215370 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-19 19:39:01.215381 | orchestrator | Monday 19 May 2025 19:38:11 +0000 (0:00:00.307) 0:00:30.883 ************ 2025-05-19 19:39:01.215391 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:39:01.215402 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:39:01.215412 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:39:01.215422 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:39:01.215433 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:39:01.215443 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:39:01.215454 | orchestrator | 2025-05-19 19:39:01.215464 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-19 19:39:01.215475 | orchestrator | Monday 19 May 2025 19:38:23 +0000 (0:00:12.303) 0:00:43.187 ************ 2025-05-19 19:39:01.215490 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:39:01.215501 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:39:01.215511 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:39:01.215521 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:39:01.215532 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:39:01.215542 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:39:01.215552 | orchestrator | 2025-05-19 19:39:01.215563 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-19 19:39:01.215573 | orchestrator | Monday 19 May 2025 19:38:25 +0000 (0:00:02.223) 0:00:45.411 ************ 2025-05-19 19:39:01.215583 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:39:01.215594 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:39:01.215604 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:39:01.215615 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:39:01.215625 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:39:01.215635 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:39:01.215646 | orchestrator | 2025-05-19 19:39:01.215657 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-19 19:39:01.215667 | orchestrator | Monday 19 May 2025 19:38:35 +0000 (0:00:10.114) 0:00:55.526 ************ 2025-05-19 19:39:01.215688 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-4'}) 2025-05-19 19:39:01.215698 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-19 19:39:01.215708 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-19 19:39:01.215718 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-19 19:39:01.215727 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-19 19:39:01.215736 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-19 19:39:01.215746 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-19 19:39:01.215755 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-19 19:39:01.215765 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-19 19:39:01.215776 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-19 19:39:01.215787 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-19 19:39:01.215797 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-19 19:39:01.215808 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215818 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215828 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215839 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215849 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215860 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 19:39:01.215870 | orchestrator | 2025-05-19 19:39:01.215880 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-19 19:39:01.215891 | orchestrator | Monday 19 May 2025 19:38:44 +0000 (0:00:08.592) 0:01:04.118 ************ 2025-05-19 19:39:01.215902 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-19 19:39:01.215912 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:39:01.215922 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-19 19:39:01.215933 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:39:01.215945 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-19 19:39:01.215956 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:39:01.215966 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-19 19:39:01.215977 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 
2025-05-19 19:39:01.215987 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-05-19 19:39:01.215998 | orchestrator |
2025-05-19 19:39:01.216009 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-05-19 19:39:01.216019 | orchestrator | Monday 19 May 2025 19:38:47 +0000 (0:00:03.463) 0:01:07.581 ************
2025-05-19 19:39:01.216031 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216041 | orchestrator | skipping: [testbed-node-3]
2025-05-19 19:39:01.216051 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216068 | orchestrator | skipping: [testbed-node-4]
2025-05-19 19:39:01.216079 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216131 | orchestrator | skipping: [testbed-node-5]
2025-05-19 19:39:01.216144 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216161 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216172 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-05-19 19:39:01.216182 | orchestrator |
2025-05-19 19:39:01.216193 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-05-19 19:39:01.216203 | orchestrator | Monday 19 May 2025 19:38:52 +0000 (0:00:04.402) 0:01:11.984 ************
2025-05-19 19:39:01.216214 | orchestrator | changed: [testbed-node-3]
2025-05-19 19:39:01.216223 | orchestrator | changed: [testbed-node-4]
2025-05-19 19:39:01.216233 | orchestrator | changed: [testbed-node-5]
2025-05-19 19:39:01.216244 | orchestrator | changed: [testbed-node-2]
2025-05-19 19:39:01.216254 | orchestrator | changed: [testbed-node-1]
2025-05-19 19:39:01.216264 | orchestrator | changed: [testbed-node-0]
2025-05-19 19:39:01.216275 | orchestrator |
2025-05-19 19:39:01.216285 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:39:01.216296 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 19:39:01.216314 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 19:39:01.216325 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 19:39:01.216336 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 19:39:01.216347 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 19:39:01.216357 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 19:39:01.216368 | orchestrator |
2025-05-19 19:39:01.216378 | orchestrator |
2025-05-19 19:39:01.216389 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 19:39:01.216400 | orchestrator | Monday 19 May 2025 19:39:00 +0000 (0:00:07.815) 0:01:19.800 ************
2025-05-19 19:39:01.216410 | orchestrator | ===============================================================================
2025-05-19 19:39:01.216420 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.93s
2025-05-19 19:39:01.216431 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.30s
2025-05-19 19:39:01.216442 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.59s
2025-05-19 19:39:01.216452 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 4.60s
2025-05-19 19:39:01.216462 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.40s
2025-05-19 19:39:01.216473 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.46s
2025-05-19 19:39:01.216483 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.42s
2025-05-19 19:39:01.216494 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.48s
2025-05-19 19:39:01.216505 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.47s
2025-05-19 19:39:01.216515 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.45s
2025-05-19 19:39:01.216526 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.35s
2025-05-19 19:39:01.216543 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.22s
2025-05-19 19:39:01.216554 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.21s
2025-05-19 19:39:01.216564 | orchestrator | module-load : Load modules ---------------------------------------------- 2.14s
2025-05-19 19:39:01.216574 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.75s
2025-05-19 19:39:01.216585 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.70s
2025-05-19 19:39:01.216595 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.48s
2025-05-19 19:39:01.216606 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.39s
2025-05-19 19:39:01.216616 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s
2025-05-19 19:39:01.216626 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s
2025-05-19 19:39:01.216636 | orchestrator | 2025-05-19 19:39:01 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED
2025-05-19 19:39:01.216647 | orchestrator | 2025-05-19 19:39:01 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:39:04.257471 | orchestrator | 2025-05-19 19:39:04 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED
2025-05-19 19:39:04.259007 | orchestrator | 2025-05-19 19:39:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:39:04.260870 | orchestrator | 2025-05-19 19:39:04 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED
2025-05-19 19:39:04.265230 | orchestrator | 2025-05-19 19:39:04 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED
2025-05-19 19:39:04.266668 | orchestrator | 2025-05-19 19:39:04 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED
2025-05-19 19:39:04.266724 | orchestrator | 2025-05-19 19:39:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:39:07.317289 | orchestrator | 2025-05-19 19:39:07 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED
2025-05-19 19:39:07.317479 | orchestrator | 2025-05-19 19:39:07 | INFO  | Task
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:07.318144 | orchestrator | 2025-05-19 19:39:07 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:07.320965 | orchestrator | 2025-05-19 19:39:07 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:07.321033 | orchestrator | 2025-05-19 19:39:07 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:07.321721 | orchestrator | 2025-05-19 19:39:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:10.371744 | orchestrator | 2025-05-19 19:39:10 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:10.372376 | orchestrator | 2025-05-19 19:39:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:10.372410 | orchestrator | 2025-05-19 19:39:10 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:10.372890 | orchestrator | 2025-05-19 19:39:10 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:10.373275 | orchestrator | 2025-05-19 19:39:10 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:10.373333 | orchestrator | 2025-05-19 19:39:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:13.415826 | orchestrator | 2025-05-19 19:39:13 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:13.416016 | orchestrator | 2025-05-19 19:39:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:13.416386 | orchestrator | 2025-05-19 19:39:13 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:13.417295 | orchestrator | 2025-05-19 19:39:13 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:13.418085 | orchestrator | 2025-05-19 19:39:13 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:13.418124 | orchestrator | 2025-05-19 19:39:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:16.462469 | orchestrator | 2025-05-19 19:39:16 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:16.465169 | orchestrator | 2025-05-19 19:39:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:16.465228 | orchestrator | 2025-05-19 19:39:16 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:16.465241 | orchestrator | 2025-05-19 19:39:16 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:16.465252 | orchestrator | 2025-05-19 19:39:16 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:16.465264 | orchestrator | 2025-05-19 19:39:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:19.525828 | orchestrator | 2025-05-19 19:39:19 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:19.526824 | orchestrator | 2025-05-19 19:39:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:19.528759 | orchestrator | 2025-05-19 19:39:19 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:19.531269 | orchestrator | 2025-05-19 19:39:19 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:19.533146 | orchestrator | 2025-05-19 
19:39:19 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:19.533246 | orchestrator | 2025-05-19 19:39:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:22.592563 | orchestrator | 2025-05-19 19:39:22 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:22.593261 | orchestrator | 2025-05-19 19:39:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:22.594043 | orchestrator | 2025-05-19 19:39:22 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:22.595245 | orchestrator | 2025-05-19 19:39:22 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:22.595724 | orchestrator | 2025-05-19 19:39:22 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:22.595746 | orchestrator | 2025-05-19 19:39:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:25.646914 | orchestrator | 2025-05-19 19:39:25 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:25.647021 | orchestrator | 2025-05-19 19:39:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:25.651235 | orchestrator | 2025-05-19 19:39:25 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:25.652256 | orchestrator | 2025-05-19 19:39:25 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:25.653148 | orchestrator | 2025-05-19 19:39:25 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:25.653372 | orchestrator | 2025-05-19 19:39:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:28.705031 | orchestrator | 2025-05-19 19:39:28 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:28.710129 | orchestrator | 2025-05-19 19:39:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:28.711308 | orchestrator | 2025-05-19 19:39:28 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:28.715361 | orchestrator | 2025-05-19 19:39:28 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:28.715406 | orchestrator | 2025-05-19 19:39:28 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:28.715415 | orchestrator | 2025-05-19 19:39:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:31.767831 | orchestrator | 2025-05-19 19:39:31 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:31.768330 | orchestrator | 2025-05-19 19:39:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:31.770882 | orchestrator | 2025-05-19 19:39:31 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:31.772018 | orchestrator | 2025-05-19 19:39:31 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:31.772928 | orchestrator | 2025-05-19 19:39:31 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:31.773253 | orchestrator | 2025-05-19 19:39:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:34.818578 | orchestrator | 2025-05-19 19:39:34 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:34.818787 | orchestrator | 2025-05-19 
19:39:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:34.819306 | orchestrator | 2025-05-19 19:39:34 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:34.819959 | orchestrator | 2025-05-19 19:39:34 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:34.820577 | orchestrator | 2025-05-19 19:39:34 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:34.820603 | orchestrator | 2025-05-19 19:39:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:37.876538 | orchestrator | 2025-05-19 19:39:37 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:37.876620 | orchestrator | 2025-05-19 19:39:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:37.876740 | orchestrator | 2025-05-19 19:39:37 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:37.877530 | orchestrator | 2025-05-19 19:39:37 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:37.878342 | orchestrator | 2025-05-19 19:39:37 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:37.878397 | orchestrator | 2025-05-19 19:39:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:40.931847 | orchestrator | 2025-05-19 19:39:40 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:40.935032 | orchestrator | 2025-05-19 19:39:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:40.936819 | orchestrator | 2025-05-19 19:39:40 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:40.938437 | orchestrator | 2025-05-19 19:39:40 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:40.941035 | orchestrator | 2025-05-19 19:39:40 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:40.941948 | orchestrator | 2025-05-19 19:39:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:43.991022 | orchestrator | 2025-05-19 19:39:43 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:43.991229 | orchestrator | 2025-05-19 19:39:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:43.991878 | orchestrator | 2025-05-19 19:39:43 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:43.995352 | orchestrator | 2025-05-19 19:39:43 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:43.996155 | orchestrator | 2025-05-19 19:39:43 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:43.996186 | orchestrator | 2025-05-19 19:39:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:47.090279 | orchestrator | 2025-05-19 19:39:47 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:47.090773 | orchestrator | 2025-05-19 19:39:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:47.092965 | orchestrator | 2025-05-19 19:39:47 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:47.096878 | orchestrator | 2025-05-19 19:39:47 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:47.097921 | 
orchestrator | 2025-05-19 19:39:47 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:47.097972 | orchestrator | 2025-05-19 19:39:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:50.203495 | orchestrator | 2025-05-19 19:39:50 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:50.209571 | orchestrator | 2025-05-19 19:39:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:50.212412 | orchestrator | 2025-05-19 19:39:50 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:50.213910 | orchestrator | 2025-05-19 19:39:50 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:50.215382 | orchestrator | 2025-05-19 19:39:50 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:50.216430 | orchestrator | 2025-05-19 19:39:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:53.281684 | orchestrator | 2025-05-19 19:39:53 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:53.282380 | orchestrator | 2025-05-19 19:39:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:53.287877 | orchestrator | 2025-05-19 19:39:53 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:53.289100 | orchestrator | 2025-05-19 19:39:53 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:53.290267 | orchestrator | 2025-05-19 19:39:53 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:53.290305 | orchestrator | 2025-05-19 19:39:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:56.333806 | orchestrator | 2025-05-19 19:39:56 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:56.335047 | orchestrator | 2025-05-19 19:39:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:56.336339 | orchestrator | 2025-05-19 19:39:56 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:56.337003 | orchestrator | 2025-05-19 19:39:56 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:56.340178 | orchestrator | 2025-05-19 19:39:56 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:56.340272 | orchestrator | 2025-05-19 19:39:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:39:59.393588 | orchestrator | 2025-05-19 19:39:59 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:39:59.396403 | orchestrator | 2025-05-19 19:39:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:39:59.401465 | orchestrator | 2025-05-19 19:39:59 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:39:59.403612 | orchestrator | 2025-05-19 19:39:59 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:39:59.406743 | orchestrator | 2025-05-19 19:39:59 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:39:59.406826 | orchestrator | 2025-05-19 19:39:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:02.459637 | orchestrator | 2025-05-19 19:40:02 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:02.461312 | 
orchestrator | 2025-05-19 19:40:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:02.461695 | orchestrator | 2025-05-19 19:40:02 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:02.462498 | orchestrator | 2025-05-19 19:40:02 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:02.464814 | orchestrator | 2025-05-19 19:40:02 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:02.464873 | orchestrator | 2025-05-19 19:40:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:05.500340 | orchestrator | 2025-05-19 19:40:05 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:05.500735 | orchestrator | 2025-05-19 19:40:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:05.501461 | orchestrator | 2025-05-19 19:40:05 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:05.502273 | orchestrator | 2025-05-19 19:40:05 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:05.505205 | orchestrator | 2025-05-19 19:40:05 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:05.505263 | orchestrator | 2025-05-19 19:40:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:08.534589 | orchestrator | 2025-05-19 19:40:08 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:08.534781 | orchestrator | 2025-05-19 19:40:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:08.535396 | orchestrator | 2025-05-19 19:40:08 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:08.536754 | orchestrator | 2025-05-19 19:40:08 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:08.537486 | orchestrator | 2025-05-19 19:40:08 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:08.539203 | orchestrator | 2025-05-19 19:40:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:11.568170 | orchestrator | 2025-05-19 19:40:11 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:11.568446 | orchestrator | 2025-05-19 19:40:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:11.568814 | orchestrator | 2025-05-19 19:40:11 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:11.570768 | orchestrator | 2025-05-19 19:40:11 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:11.571034 | orchestrator | 2025-05-19 19:40:11 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:11.571179 | orchestrator | 2025-05-19 19:40:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:14.594230 | orchestrator | 2025-05-19 19:40:14 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:14.594363 | orchestrator | 2025-05-19 19:40:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:14.594515 | orchestrator | 2025-05-19 19:40:14 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:14.595390 | orchestrator | 2025-05-19 19:40:14 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 
2025-05-19 19:40:14.595628 | orchestrator | 2025-05-19 19:40:14 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:14.595654 | orchestrator | 2025-05-19 19:40:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:17.638304 | orchestrator | 2025-05-19 19:40:17 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:17.639804 | orchestrator | 2025-05-19 19:40:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:17.641887 | orchestrator | 2025-05-19 19:40:17 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:17.643650 | orchestrator | 2025-05-19 19:40:17 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:17.644901 | orchestrator | 2025-05-19 19:40:17 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:17.645027 | orchestrator | 2025-05-19 19:40:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:20.676210 | orchestrator | 2025-05-19 19:40:20 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:20.676692 | orchestrator | 2025-05-19 19:40:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:20.676747 | orchestrator | 2025-05-19 19:40:20 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:20.677155 | orchestrator | 2025-05-19 19:40:20 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:20.679214 | orchestrator | 2025-05-19 19:40:20 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:20.679327 | orchestrator | 2025-05-19 19:40:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:23.709139 | orchestrator | 2025-05-19 19:40:23 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:23.709401 | orchestrator | 2025-05-19 19:40:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:23.709637 | orchestrator | 2025-05-19 19:40:23 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:23.710875 | orchestrator | 2025-05-19 19:40:23 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:23.711348 | orchestrator | 2025-05-19 19:40:23 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:23.711451 | orchestrator | 2025-05-19 19:40:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:26.748207 | orchestrator | 2025-05-19 19:40:26 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:26.748308 | orchestrator | 2025-05-19 19:40:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:26.748318 | orchestrator | 2025-05-19 19:40:26 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:26.748327 | orchestrator | 2025-05-19 19:40:26 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:26.748335 | orchestrator | 2025-05-19 19:40:26 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED 2025-05-19 19:40:26.748344 | orchestrator | 2025-05-19 19:40:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:29.797372 | orchestrator | 2025-05-19 19:40:29 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 
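The repeated INFO lines above show the deployment tooling polling a fixed set of task IDs until each one leaves the STARTED state. A minimal sketch of that kind of wait loop is shown here for orientation only; get_task_state is a hypothetical stand-in, not the actual osism client API, and the real interval and output format may differ.

import time
from typing import Callable

def wait_for_tasks(task_ids: list[str],
                   get_task_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    # Poll every task that has not yet finished; drop a task from the
    # pending set once it reports SUCCESS, then sleep before re-checking.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

Once a task reports SUCCESS it is no longer polled, which matches the transition visible below where task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 reaches SUCCESS and the next play starts.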
2025-05-19 19:40:29.798083 | orchestrator | 2025-05-19 19:40:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:40:29.799366 | orchestrator | 2025-05-19 19:40:29 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED
2025-05-19 19:40:29.799410 | orchestrator | 2025-05-19 19:40:29 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED
2025-05-19 19:40:29.799423 | orchestrator | 2025-05-19 19:40:29 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED
2025-05-19 19:40:29.799432 | orchestrator | 2025-05-19 19:40:29 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:40:32.844895 | orchestrator | 2025-05-19 19:40:32 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED
2025-05-19 19:40:32.845382 | orchestrator | 2025-05-19 19:40:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:40:32.846345 | orchestrator | 2025-05-19 19:40:32 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED
2025-05-19 19:40:32.848489 | orchestrator | 2025-05-19 19:40:32 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED
2025-05-19 19:40:32.849217 | orchestrator | 2025-05-19 19:40:32 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state STARTED
2025-05-19 19:40:32.849333 | orchestrator | 2025-05-19 19:40:32 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:40:35.883615 | orchestrator | 2025-05-19 19:40:35 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED
2025-05-19 19:40:35.883693 | orchestrator | 2025-05-19 19:40:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:40:35.884145 | orchestrator | 2025-05-19 19:40:35 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED
2025-05-19 19:40:35.884763 | orchestrator | 2025-05-19 19:40:35 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED
2025-05-19 19:40:35.885357 | orchestrator | 2025-05-19 19:40:35 | INFO  | Task 02ba0a03-7158-4eb5-9c9e-6f1385c9b8f2 is in state SUCCESS
2025-05-19 19:40:35.885453 | orchestrator | 2025-05-19 19:40:35 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:40:35.886568 | orchestrator |
2025-05-19 19:40:35.886587 | orchestrator |
2025-05-19 19:40:35.886593 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-19 19:40:35.886598 | orchestrator |
2025-05-19 19:40:35.886603 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-19 19:40:35.886612 | orchestrator | Monday 19 May 2025 19:38:06 +0000 (0:00:00.375) 0:00:00.375 ************
2025-05-19 19:40:35.886616 | orchestrator | ok: [localhost] => {
2025-05-19 19:40:35.886638 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-19 19:40:35.886643 | orchestrator | }
2025-05-19 19:40:35.886647 | orchestrator |
2025-05-19 19:40:35.886651 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-19 19:40:35.886656 | orchestrator | Monday 19 May 2025 19:38:06 +0000 (0:00:00.061) 0:00:00.437 ************
2025-05-19 19:40:35.886661 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-19 19:40:35.886667 | orchestrator | ...ignoring
2025-05-19 19:40:35.886672 | orchestrator |
2025-05-19 19:40:35.886676 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-19 19:40:35.886680 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:02.516) 0:00:02.953 ************
2025-05-19 19:40:35.886685 | orchestrator | skipping: [localhost]
2025-05-19 19:40:35.886689 | orchestrator |
2025-05-19 19:40:35.886693 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-19 19:40:35.886697 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:00.044) 0:00:02.998 ************
2025-05-19 19:40:35.886702 | orchestrator | ok: [localhost]
2025-05-19 19:40:35.886706 | orchestrator |
2025-05-19 19:40:35.886710 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 19:40:35.886715 | orchestrator |
2025-05-19 19:40:35.886719 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 19:40:35.886723 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:00.133) 0:00:03.132 ************
2025-05-19 19:40:35.886727 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:40:35.886731 | orchestrator | ok: [testbed-node-1]
2025-05-19 19:40:35.886736 | orchestrator | ok: [testbed-node-2]
2025-05-19 19:40:35.886740 | orchestrator |
2025-05-19 19:40:35.886744 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 19:40:35.886749 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:00.415) 0:00:03.548 ************
2025-05-19 19:40:35.886753 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-19 19:40:35.886758 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-19 19:40:35.886762 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-19 19:40:35.886766 | orchestrator |
2025-05-19 19:40:35.886770 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-19 19:40:35.886774 | orchestrator |
2025-05-19 19:40:35.886778 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-19 19:40:35.886781 | orchestrator | Monday 19 May 2025 19:38:10 +0000 (0:00:00.862) 0:00:04.410 ************
2025-05-19 19:40:35.886785 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 19:40:35.886789 | orchestrator |
2025-05-19 19:40:35.886793 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-19 19:40:35.886797 | orchestrator | Monday 19 May 2025 19:38:12 +0000 (0:00:01.383) 0:00:05.794 ************
2025-05-19 19:40:35.886800 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:40:35.886804 | orchestrator |
2025-05-19 19:40:35.886808 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-19 19:40:35.886811 | orchestrator | Monday 19 May 2025 19:38:13 +0000 (0:00:01.656) 0:00:07.451 ************
2025-05-19 19:40:35.886815 | orchestrator | skipping: [testbed-node-0]
2025-05-19 19:40:35.886820 | orchestrator |
2025-05-19 19:40:35.886824 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version]
************************************* 2025-05-19 19:40:35.886827 | orchestrator | Monday 19 May 2025 19:38:14 +0000 (0:00:00.439) 0:00:07.890 ************ 2025-05-19 19:40:35.886831 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.886835 | orchestrator | 2025-05-19 19:40:35.886838 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-19 19:40:35.886842 | orchestrator | Monday 19 May 2025 19:38:15 +0000 (0:00:01.021) 0:00:08.911 ************ 2025-05-19 19:40:35.886849 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.886853 | orchestrator | 2025-05-19 19:40:35.886857 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-19 19:40:35.886861 | orchestrator | Monday 19 May 2025 19:38:15 +0000 (0:00:00.766) 0:00:09.678 ************ 2025-05-19 19:40:35.886864 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.886868 | orchestrator | 2025-05-19 19:40:35.886872 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-19 19:40:35.886875 | orchestrator | Monday 19 May 2025 19:38:16 +0000 (0:00:00.483) 0:00:10.162 ************ 2025-05-19 19:40:35.886879 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:40:35.886883 | orchestrator | 2025-05-19 19:40:35.886887 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-19 19:40:35.886891 | orchestrator | Monday 19 May 2025 19:38:17 +0000 (0:00:01.149) 0:00:11.311 ************ 2025-05-19 19:40:35.886895 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:40:35.886898 | orchestrator | 2025-05-19 19:40:35.886902 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-19 19:40:35.886906 | orchestrator | Monday 19 May 2025 19:38:18 +0000 (0:00:00.989) 0:00:12.301 ************ 2025-05-19 19:40:35.886909 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.886913 | orchestrator | 2025-05-19 19:40:35.886917 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-19 19:40:35.886921 | orchestrator | Monday 19 May 2025 19:38:18 +0000 (0:00:00.380) 0:00:12.681 ************ 2025-05-19 19:40:35.886925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.886928 | orchestrator | 2025-05-19 19:40:35.886937 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-19 19:40:35.886941 | orchestrator | Monday 19 May 2025 19:38:19 +0000 (0:00:00.384) 0:00:13.066 ************ 2025-05-19 19:40:35.886950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.886957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.886965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.886969 | orchestrator | 2025-05-19 19:40:35.886973 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-19 19:40:35.886977 | orchestrator | Monday 19 May 2025 19:38:20 +0000 (0:00:01.604) 0:00:14.670 ************ 2025-05-19 19:40:35.886989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.886994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.886998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.887005 | orchestrator | 2025-05-19 19:40:35.887009 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-19 19:40:35.887013 | orchestrator | Monday 19 May 2025 19:38:22 +0000 (0:00:02.005) 0:00:16.676 ************ 2025-05-19 19:40:35.887017 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 19:40:35.887021 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 19:40:35.887025 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 19:40:35.887029 | orchestrator | 2025-05-19 19:40:35.887032 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-05-19 19:40:35.887036 | orchestrator | Monday 19 May 2025 19:38:25 +0000 (0:00:02.516) 0:00:19.193 ************ 
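The config-file tasks in this role all follow the same kolla-ansible pattern: render a Jinja2 template shipped with the role (e.g. /ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) into the host's /etc/kolla/rabbitmq/ directory, which the container mounts as /var/lib/kolla/config_files/, and notify the "Restart rabbitmq container" handler that runs later in the log. A minimal sketch of one such task is shown below; the file mode and become flag are illustrative assumptions and this is not the actual role task:

  - name: Copying over rabbitmq-env.conf
    ansible.builtin.template:
      src: rabbitmq-env.conf.j2                      # template path as reported in the log
      dest: /etc/kolla/rabbitmq/rabbitmq-env.conf    # host-side config dir mounted into the container
      mode: "0660"                                   # assumed permissions, not taken from the role
    become: true
    notify:
      - Restart rabbitmq container                   # handler name as it appears in the log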
2025-05-19 19:40:35.887040 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 19:40:35.887043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 19:40:35.887047 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-05-19 19:40:35.887051 | orchestrator | 2025-05-19 19:40:35.887055 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-05-19 19:40:35.887058 | orchestrator | Monday 19 May 2025 19:38:29 +0000 (0:00:04.043) 0:00:23.236 ************ 2025-05-19 19:40:35.887062 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 19:40:35.887066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 19:40:35.887069 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-05-19 19:40:35.887073 | orchestrator | 2025-05-19 19:40:35.887079 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-05-19 19:40:35.887116 | orchestrator | Monday 19 May 2025 19:38:31 +0000 (0:00:01.836) 0:00:25.073 ************ 2025-05-19 19:40:35.887120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 19:40:35.887124 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 19:40:35.887128 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-05-19 19:40:35.887132 | orchestrator | 2025-05-19 19:40:35.887136 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-05-19 19:40:35.887140 | orchestrator | Monday 19 May 2025 19:38:33 +0000 (0:00:01.671) 0:00:26.744 ************ 2025-05-19 19:40:35.887143 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-19 19:40:35.887147 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-19 19:40:35.887151 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-05-19 19:40:35.887155 | orchestrator | 2025-05-19 19:40:35.887159 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-05-19 19:40:35.887162 | orchestrator | Monday 19 May 2025 19:38:34 +0000 (0:00:01.423) 0:00:28.167 ************ 2025-05-19 19:40:35.887169 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 19:40:35.887173 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 19:40:35.887177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-19 19:40:35.887181 | orchestrator | 2025-05-19 19:40:35.887185 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-19 19:40:35.887188 | orchestrator | Monday 19 May 2025 19:38:36 +0000 (0:00:01.909) 0:00:30.077 ************ 2025-05-19 19:40:35.887192 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.887196 | orchestrator | skipping: [testbed-node-1] 
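The "Check RabbitMQ service" probe at the start of this play (expected to fail on a fresh deployment, as the preceding debug message notes) and the later "Waiting for rabbitmq to start" steps both come down to polling the RabbitMQ management endpoint. A standalone equivalent is sketched below, assuming the standard wait_for module and the address/port reported in the error above (192.168.16.9:15672); the 2-second timeout mirrors the elapsed time in the log and is an assumption:

  - name: Check RabbitMQ management endpoint
    ansible.builtin.wait_for:
      host: 192.168.16.9            # address taken from the error message above
      port: 15672
      search_regex: RabbitMQ Management
      timeout: 2                    # assumed; matches the elapsed time in the log
    register: rabbitmq_check        # illustrative variable name
    ignore_errors: true

A failed probe simply means RabbitMQ is not deployed yet; the surrounding play uses the result to decide whether to set kolla_action_rabbitmq to upgrade (RabbitMQ already running) or fall back to kolla_action_ng, as shown in the tasks above.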
2025-05-19 19:40:35.887200 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:40:35.887203 | orchestrator | 2025-05-19 19:40:35.887207 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-19 19:40:35.887211 | orchestrator | Monday 19 May 2025 19:38:37 +0000 (0:00:01.307) 0:00:31.385 ************ 2025-05-19 19:40:35.887621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.887636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.887649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:40:35.887660 | orchestrator | 2025-05-19 19:40:35.887665 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-19 19:40:35.887669 | orchestrator | Monday 19 May 2025 19:38:39 +0000 (0:00:01.680) 0:00:33.065 ************ 2025-05-19 19:40:35.887673 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:40:35.887677 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:40:35.887682 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:40:35.887686 | orchestrator | 2025-05-19 19:40:35.887690 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-19 19:40:35.887694 | orchestrator | Monday 19 May 2025 19:38:40 +0000 (0:00:01.037) 0:00:34.103 ************ 2025-05-19 19:40:35.887698 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:40:35.887702 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:40:35.887706 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:40:35.887710 | orchestrator | 2025-05-19 19:40:35.887714 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-05-19 19:40:35.887721 | orchestrator | Monday 19 May 2025 19:38:50 +0000 (0:00:09.893) 0:00:43.996 ************ 2025-05-19 19:40:35.887725 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:40:35.887730 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:40:35.887734 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:40:35.887738 | orchestrator | 2025-05-19 19:40:35.887742 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-19 19:40:35.887746 | orchestrator | 2025-05-19 19:40:35.887750 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-19 19:40:35.887754 | orchestrator | Monday 19 May 2025 19:38:50 +0000 (0:00:00.589) 0:00:44.586 ************ 2025-05-19 19:40:35.887758 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:40:35.887763 | orchestrator | 2025-05-19 19:40:35.887767 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-19 19:40:35.887771 | orchestrator | Monday 19 May 2025 19:38:51 +0000 (0:00:00.580) 0:00:45.167 ************ 2025-05-19 19:40:35.887775 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:40:35.887779 | orchestrator | 2025-05-19 19:40:35.887783 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-19 19:40:35.887787 | orchestrator | Monday 19 May 2025 19:38:52 +0000 (0:00:00.764) 0:00:45.931 ************ 2025-05-19 19:40:35.887791 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:40:35.887796 | orchestrator | 2025-05-19 19:40:35.887799 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-19 19:40:35.887804 | orchestrator | Monday 19 May 2025 19:38:59 +0000 (0:00:07.652) 0:00:53.583 ************ 2025-05-19 19:40:35.887808 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:40:35.887812 | orchestrator | 2025-05-19 19:40:35.887816 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-19 19:40:35.887820 | orchestrator | 2025-05-19 19:40:35.887824 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-19 
19:40:35.887828 | orchestrator | Monday 19 May 2025 19:39:53 +0000 (0:00:53.677) 0:01:47.260 ************ 2025-05-19 19:40:35.887832 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:40:35.887836 | orchestrator | 2025-05-19 19:40:35.887840 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-19 19:40:35.887844 | orchestrator | Monday 19 May 2025 19:39:54 +0000 (0:00:00.826) 0:01:48.087 ************ 2025-05-19 19:40:35.887848 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:40:35.887852 | orchestrator | 2025-05-19 19:40:35.887856 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-19 19:40:35.887860 | orchestrator | Monday 19 May 2025 19:39:54 +0000 (0:00:00.258) 0:01:48.345 ************ 2025-05-19 19:40:35.887868 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:40:35.887872 | orchestrator | 2025-05-19 19:40:35.887876 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-19 19:40:35.887880 | orchestrator | Monday 19 May 2025 19:39:57 +0000 (0:00:02.483) 0:01:50.829 ************ 2025-05-19 19:40:35.887884 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:40:35.887888 | orchestrator | 2025-05-19 19:40:35.887892 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-05-19 19:40:35.887896 | orchestrator | 2025-05-19 19:40:35.887900 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-05-19 19:40:35.887905 | orchestrator | Monday 19 May 2025 19:40:11 +0000 (0:00:14.684) 0:02:05.514 ************ 2025-05-19 19:40:35.887909 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:40:35.887913 | orchestrator | 2025-05-19 19:40:35.887917 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-05-19 19:40:35.887921 | orchestrator | Monday 19 May 2025 19:40:12 +0000 (0:00:00.739) 0:02:06.253 ************ 2025-05-19 19:40:35.887925 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:40:35.887929 | orchestrator | 2025-05-19 19:40:35.887933 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-05-19 19:40:35.887941 | orchestrator | Monday 19 May 2025 19:40:12 +0000 (0:00:00.342) 0:02:06.596 ************ 2025-05-19 19:40:35.887945 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:40:35.887949 | orchestrator | 2025-05-19 19:40:35.887953 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-05-19 19:40:35.887957 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:06.935) 0:02:13.531 ************ 2025-05-19 19:40:35.887961 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:40:35.887965 | orchestrator | 2025-05-19 19:40:35.887969 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-05-19 19:40:35.887973 | orchestrator | 2025-05-19 19:40:35.887978 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-19 19:40:35.887982 | orchestrator | Monday 19 May 2025 19:40:31 +0000 (0:00:11.219) 0:02:24.750 ************ 2025-05-19 19:40:35.887986 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:40:35.887990 | orchestrator | 2025-05-19 19:40:35.887994 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-05-19 19:40:35.887998 | orchestrator | Monday 19 May 2025 19:40:31 +0000 (0:00:00.757) 0:02:25.507 ************ 2025-05-19 19:40:35.888002 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-19 19:40:35.888006 | orchestrator | enable_outward_rabbitmq_True 2025-05-19 19:40:35.888010 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-19 19:40:35.888014 | orchestrator | outward_rabbitmq_restart 2025-05-19 19:40:35.888019 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:40:35.888023 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:40:35.888027 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:40:35.888031 | orchestrator | 2025-05-19 19:40:35.888035 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-19 19:40:35.888039 | orchestrator | skipping: no hosts matched 2025-05-19 19:40:35.888043 | orchestrator | 2025-05-19 19:40:35.888047 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-19 19:40:35.888051 | orchestrator | skipping: no hosts matched 2025-05-19 19:40:35.888055 | orchestrator | 2025-05-19 19:40:35.888059 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-19 19:40:35.888063 | orchestrator | skipping: no hosts matched 2025-05-19 19:40:35.888067 | orchestrator | 2025-05-19 19:40:35.888071 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:40:35.888078 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-19 19:40:35.888103 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 19:40:35.888112 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:40:35.888117 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:40:35.888121 | orchestrator | 2025-05-19 19:40:35.888125 | orchestrator | 2025-05-19 19:40:35.888129 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:40:35.888133 | orchestrator | Monday 19 May 2025 19:40:34 +0000 (0:00:02.617) 0:02:28.125 ************ 2025-05-19 19:40:35.888137 | orchestrator | =============================================================================== 2025-05-19 19:40:35.888141 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.58s 2025-05-19 19:40:35.888146 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 17.07s 2025-05-19 19:40:35.888150 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.89s 2025-05-19 19:40:35.888154 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 4.04s 2025-05-19 19:40:35.888158 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.62s 2025-05-19 19:40:35.888162 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.52s 2025-05-19 19:40:35.888166 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.52s 2025-05-19 19:40:35.888170 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 
2.15s 2025-05-19 19:40:35.888174 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.01s 2025-05-19 19:40:35.888179 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.91s 2025-05-19 19:40:35.888183 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.84s 2025-05-19 19:40:35.888187 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.68s 2025-05-19 19:40:35.888191 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.67s 2025-05-19 19:40:35.888195 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.66s 2025-05-19 19:40:35.888199 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.60s 2025-05-19 19:40:35.888203 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.42s 2025-05-19 19:40:35.888207 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.38s 2025-05-19 19:40:35.888211 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.36s 2025-05-19 19:40:35.888215 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.31s 2025-05-19 19:40:35.888220 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.15s 2025-05-19 19:40:38.916720 | orchestrator | 2025-05-19 19:40:38 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:38.916910 | orchestrator | 2025-05-19 19:40:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:38.917640 | orchestrator | 2025-05-19 19:40:38 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:38.918354 | orchestrator | 2025-05-19 19:40:38 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:38.918501 | orchestrator | 2025-05-19 19:40:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:41.976907 | orchestrator | 2025-05-19 19:40:41 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:41.981606 | orchestrator | 2025-05-19 19:40:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:41.984398 | orchestrator | 2025-05-19 19:40:41 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:41.987054 | orchestrator | 2025-05-19 19:40:41 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:41.987140 | orchestrator | 2025-05-19 19:40:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:45.042964 | orchestrator | 2025-05-19 19:40:45 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:45.044872 | orchestrator | 2025-05-19 19:40:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:45.048205 | orchestrator | 2025-05-19 19:40:45 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:45.052565 | orchestrator | 2025-05-19 19:40:45 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:45.052830 | orchestrator | 2025-05-19 19:40:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:48.105578 | orchestrator | 2025-05-19 19:40:48 | INFO  | Task 
d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:48.111196 | orchestrator | 2025-05-19 19:40:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:48.112203 | orchestrator | 2025-05-19 19:40:48 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:48.115031 | orchestrator | 2025-05-19 19:40:48 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:48.115071 | orchestrator | 2025-05-19 19:40:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:51.154490 | orchestrator | 2025-05-19 19:40:51 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:51.155028 | orchestrator | 2025-05-19 19:40:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:51.155762 | orchestrator | 2025-05-19 19:40:51 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:51.157532 | orchestrator | 2025-05-19 19:40:51 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:51.157589 | orchestrator | 2025-05-19 19:40:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:54.208421 | orchestrator | 2025-05-19 19:40:54 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:54.210491 | orchestrator | 2025-05-19 19:40:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:54.210932 | orchestrator | 2025-05-19 19:40:54 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:54.215462 | orchestrator | 2025-05-19 19:40:54 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:54.215826 | orchestrator | 2025-05-19 19:40:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:40:57.275009 | orchestrator | 2025-05-19 19:40:57 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:40:57.275210 | orchestrator | 2025-05-19 19:40:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:40:57.275529 | orchestrator | 2025-05-19 19:40:57 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:40:57.278416 | orchestrator | 2025-05-19 19:40:57 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:40:57.278639 | orchestrator | 2025-05-19 19:40:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:00.325777 | orchestrator | 2025-05-19 19:41:00 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:00.326145 | orchestrator | 2025-05-19 19:41:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:00.326734 | orchestrator | 2025-05-19 19:41:00 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:00.327577 | orchestrator | 2025-05-19 19:41:00 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:00.327826 | orchestrator | 2025-05-19 19:41:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:03.373998 | orchestrator | 2025-05-19 19:41:03 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:03.374230 | orchestrator | 2025-05-19 19:41:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:03.374246 | orchestrator | 2025-05-19 19:41:03 | INFO  | Task 
677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:03.374256 | orchestrator | 2025-05-19 19:41:03 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:03.374265 | orchestrator | 2025-05-19 19:41:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:06.408045 | orchestrator | 2025-05-19 19:41:06 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:06.408288 | orchestrator | 2025-05-19 19:41:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:06.408771 | orchestrator | 2025-05-19 19:41:06 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:06.409642 | orchestrator | 2025-05-19 19:41:06 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:06.409689 | orchestrator | 2025-05-19 19:41:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:09.461651 | orchestrator | 2025-05-19 19:41:09 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:09.462515 | orchestrator | 2025-05-19 19:41:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:09.464076 | orchestrator | 2025-05-19 19:41:09 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:09.465401 | orchestrator | 2025-05-19 19:41:09 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:09.465435 | orchestrator | 2025-05-19 19:41:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:12.525456 | orchestrator | 2025-05-19 19:41:12 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:12.525582 | orchestrator | 2025-05-19 19:41:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:12.526532 | orchestrator | 2025-05-19 19:41:12 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:12.528835 | orchestrator | 2025-05-19 19:41:12 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:12.528887 | orchestrator | 2025-05-19 19:41:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:15.592716 | orchestrator | 2025-05-19 19:41:15 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:15.594839 | orchestrator | 2025-05-19 19:41:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:15.595949 | orchestrator | 2025-05-19 19:41:15 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:15.598391 | orchestrator | 2025-05-19 19:41:15 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:15.598527 | orchestrator | 2025-05-19 19:41:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:18.656533 | orchestrator | 2025-05-19 19:41:18 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:18.656646 | orchestrator | 2025-05-19 19:41:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:18.660047 | orchestrator | 2025-05-19 19:41:18 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:18.663706 | orchestrator | 2025-05-19 19:41:18 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:18.663775 | orchestrator | 2025-05-19 19:41:18 | INFO  | Wait 1 
second(s) until the next check 2025-05-19 19:41:21.707807 | orchestrator | 2025-05-19 19:41:21 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:21.707908 | orchestrator | 2025-05-19 19:41:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:21.707924 | orchestrator | 2025-05-19 19:41:21 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:21.707945 | orchestrator | 2025-05-19 19:41:21 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:21.707968 | orchestrator | 2025-05-19 19:41:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:24.776652 | orchestrator | 2025-05-19 19:41:24 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:24.777713 | orchestrator | 2025-05-19 19:41:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:24.779265 | orchestrator | 2025-05-19 19:41:24 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:24.781222 | orchestrator | 2025-05-19 19:41:24 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:24.781268 | orchestrator | 2025-05-19 19:41:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:27.838667 | orchestrator | 2025-05-19 19:41:27 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:27.838884 | orchestrator | 2025-05-19 19:41:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:27.841121 | orchestrator | 2025-05-19 19:41:27 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:27.841142 | orchestrator | 2025-05-19 19:41:27 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:27.841147 | orchestrator | 2025-05-19 19:41:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:30.881963 | orchestrator | 2025-05-19 19:41:30 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:30.882621 | orchestrator | 2025-05-19 19:41:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:30.883578 | orchestrator | 2025-05-19 19:41:30 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:30.884984 | orchestrator | 2025-05-19 19:41:30 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state STARTED 2025-05-19 19:41:30.885019 | orchestrator | 2025-05-19 19:41:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:33.939152 | orchestrator | 2025-05-19 19:41:33 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:33.941300 | orchestrator | 2025-05-19 19:41:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:33.943834 | orchestrator | 2025-05-19 19:41:33 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:33.948738 | orchestrator | 2025-05-19 19:41:33.948846 | orchestrator | 2025-05-19 19:41:33.948863 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:41:33.948876 | orchestrator | 2025-05-19 19:41:33.948888 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:41:33.948899 | orchestrator | Monday 19 May 2025 19:39:04 +0000 (0:00:00.402) 0:00:00.402 
************ 2025-05-19 19:41:33.948911 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:41:33.948923 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:41:33.948934 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:41:33.948945 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.948956 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.948967 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.948977 | orchestrator | 2025-05-19 19:41:33.948989 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:41:33.949000 | orchestrator | Monday 19 May 2025 19:39:04 +0000 (0:00:00.699) 0:00:01.102 ************ 2025-05-19 19:41:33.949011 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-19 19:41:33.949022 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-19 19:41:33.949033 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-19 19:41:33.949044 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-19 19:41:33.949054 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-19 19:41:33.949065 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-19 19:41:33.949075 | orchestrator | 2025-05-19 19:41:33.949111 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-19 19:41:33.949122 | orchestrator | 2025-05-19 19:41:33.949132 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-19 19:41:33.949143 | orchestrator | Monday 19 May 2025 19:39:06 +0000 (0:00:01.446) 0:00:02.548 ************ 2025-05-19 19:41:33.949155 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:41:33.949168 | orchestrator | 2025-05-19 19:41:33.949179 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-19 19:41:33.949190 | orchestrator | Monday 19 May 2025 19:39:08 +0000 (0:00:01.821) 0:00:04.370 ************ 2025-05-19 19:41:33.949203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949357 | orchestrator | 2025-05-19 19:41:33.949369 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-19 19:41:33.949381 | orchestrator | Monday 19 May 2025 19:39:09 +0000 (0:00:01.741) 0:00:06.111 ************ 2025-05-19 19:41:33.949394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949433 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949477 | orchestrator | 2025-05-19 19:41:33.949489 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-19 19:41:33.949500 | orchestrator | Monday 19 May 2025 19:39:11 +0000 (0:00:02.100) 0:00:08.211 ************ 2025-05-19 19:41:33.949517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949595 | orchestrator | 2025-05-19 19:41:33.949606 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-19 19:41:33.949617 | orchestrator | Monday 19 May 2025 19:39:13 +0000 (0:00:01.388) 0:00:09.600 ************ 2025-05-19 19:41:33.949628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949639 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949713 | orchestrator | 2025-05-19 19:41:33.949724 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-19 19:41:33.949735 | orchestrator | Monday 19 May 2025 19:39:15 +0000 (0:00:02.353) 0:00:11.954 ************ 2025-05-19 19:41:33.949745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.949826 | orchestrator | 2025-05-19 
19:41:33.949837 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-19 19:41:33.949848 | orchestrator | Monday 19 May 2025 19:39:17 +0000 (0:00:01.736) 0:00:13.691 ************ 2025-05-19 19:41:33.949858 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:41:33.949870 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:41:33.949881 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.949891 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:41:33.949902 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.949913 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.949923 | orchestrator | 2025-05-19 19:41:33.949934 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-05-19 19:41:33.949944 | orchestrator | Monday 19 May 2025 19:39:20 +0000 (0:00:03.224) 0:00:16.915 ************ 2025-05-19 19:41:33.949955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-19 19:41:33.949966 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-19 19:41:33.949977 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-19 19:41:33.949992 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-19 19:41:33.950004 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-19 19:41:33.950072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-19 19:41:33.950120 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950132 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950143 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950153 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 19:41:33.950185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950198 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950209 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950229 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 19:41:33.950262 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950274 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950284 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950295 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950305 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950322 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 19:41:33.950340 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950357 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950375 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950409 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 19:41:33.950444 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950463 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950509 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950526 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 19:41:33.950563 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 19:41:33.950582 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 19:41:33.950600 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 19:41:33.950619 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 19:41:33.950646 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 19:41:33.950665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 19:41:33.950683 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 
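The per-item output of "Configure OVN in OVSDB" above is the role writing chassis-level settings into the local Open vSwitch database as external_ids: each node's Geneve tunnel endpoint (ovn-encap-ip), the encapsulation type, the clustered southbound endpoints on the three controllers (port 6642), the probe intervals and ovn-monitor-all; the gateway-related keys continue in the items below. As a rough manual equivalent of what ends up in OVSDB, a sketch with ovs-vsctl using testbed-node-3's values from the log (the role applies these through Ansible, not this exact invocation):

  # Chassis-level OVN settings as they land in the local OVSDB (values from testbed-node-3)
  ovs-vsctl set open_vswitch . \
      external_ids:ovn-encap-ip=192.168.16.13 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote='"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"' \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false

  # What ovn-controller will actually read back
  ovs-vsctl get open_vswitch . external_ids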
2025-05-19 19:41:33.950702 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-19 19:41:33.950732 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-19 19:41:33.950750 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-19 19:41:33.950768 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-19 19:41:33.950786 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-19 19:41:33.950804 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 19:41:33.950823 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 19:41:33.950841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 19:41:33.950859 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 19:41:33.950877 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 19:41:33.950894 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 19:41:33.950913 | orchestrator | 2025-05-19 19:41:33.950930 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.950949 | orchestrator | Monday 19 May 2025 19:39:41 +0000 (0:00:21.178) 0:00:38.093 ************ 2025-05-19 19:41:33.950968 | orchestrator | 2025-05-19 19:41:33.950986 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.951005 | orchestrator | Monday 19 May 2025 19:39:41 +0000 (0:00:00.071) 0:00:38.165 ************ 2025-05-19 19:41:33.951022 | orchestrator | 2025-05-19 19:41:33.951040 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.951059 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.322) 0:00:38.487 ************ 2025-05-19 19:41:33.951077 | orchestrator | 2025-05-19 19:41:33.951136 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.951155 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.054) 0:00:38.541 ************ 2025-05-19 19:41:33.951173 | orchestrator | 2025-05-19 19:41:33.951190 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.951207 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.056) 0:00:38.598 ************ 2025-05-19 19:41:33.951225 | orchestrator | 2025-05-19 19:41:33.951242 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 19:41:33.951259 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.055) 0:00:38.653 ************ 2025-05-19 
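The tail of the same task shows the split between gateway and compute chassis: testbed-node-0/1/2 keep ovn-bridge-mappings and ovn-cms-options (enable-chassis-as-gw plus the nova availability zone), while those keys are removed on testbed-node-3/4/5, which instead carry a per-physnet chassis MAC mapping. A sketch of the two variants, again with ovs-vsctl and values taken from the log, not the role's own mechanism:

  # Network/controller nodes: expose br-ex for physnet1 and advertise this chassis as a gateway
  ovs-vsctl set open_vswitch . \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      external_ids:ovn-cms-options='"enable-chassis-as-gw,availability-zones=nova"'

  # Compute-only nodes: drop the gateway keys, publish a chassis MAC for physnet1 (testbed-node-3's MAC)
  ovs-vsctl remove open_vswitch . external_ids ovn-bridge-mappings
  ovs-vsctl remove open_vswitch . external_ids ovn-cms-options
  ovs-vsctl set open_vswitch . external_ids:ovn-chassis-mac-mappings='"physnet1:52:54:00:89:18:56"'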
19:41:33.951276 | orchestrator | 2025-05-19 19:41:33.951294 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-05-19 19:41:33.951311 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.055) 0:00:38.708 ************ 2025-05-19 19:41:33.951329 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:41:33.951348 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:41:33.951366 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.951383 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:41:33.951401 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.951418 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.951435 | orchestrator | 2025-05-19 19:41:33.951452 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-19 19:41:33.951478 | orchestrator | Monday 19 May 2025 19:39:44 +0000 (0:00:02.186) 0:00:40.895 ************ 2025-05-19 19:41:33.951507 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.951526 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:41:33.951543 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.951559 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:41:33.951577 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:41:33.951594 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.951612 | orchestrator | 2025-05-19 19:41:33.951630 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-19 19:41:33.951647 | orchestrator | 2025-05-19 19:41:33.951664 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 19:41:33.951682 | orchestrator | Monday 19 May 2025 19:40:07 +0000 (0:00:22.568) 0:01:03.464 ************ 2025-05-19 19:41:33.951699 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:41:33.951717 | orchestrator | 2025-05-19 19:41:33.951735 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 19:41:33.951752 | orchestrator | Monday 19 May 2025 19:40:08 +0000 (0:00:00.989) 0:01:04.454 ************ 2025-05-19 19:41:33.951771 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:41:33.951788 | orchestrator | 2025-05-19 19:41:33.951819 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-19 19:41:33.951837 | orchestrator | Monday 19 May 2025 19:40:09 +0000 (0:00:01.102) 0:01:05.556 ************ 2025-05-19 19:41:33.951854 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.951871 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.951888 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.951906 | orchestrator | 2025-05-19 19:41:33.951923 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-19 19:41:33.951940 | orchestrator | Monday 19 May 2025 19:40:10 +0000 (0:00:01.158) 0:01:06.714 ************ 2025-05-19 19:41:33.951957 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.951975 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.951992 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.952008 | orchestrator | 2025-05-19 19:41:33.952026 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-19 
19:41:33.952044 | orchestrator | Monday 19 May 2025 19:40:10 +0000 (0:00:00.339) 0:01:07.053 ************ 2025-05-19 19:41:33.952061 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.952078 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.952164 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.952182 | orchestrator | 2025-05-19 19:41:33.952200 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-19 19:41:33.952217 | orchestrator | Monday 19 May 2025 19:40:11 +0000 (0:00:00.426) 0:01:07.480 ************ 2025-05-19 19:41:33.952235 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.952252 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.952271 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.952288 | orchestrator | 2025-05-19 19:41:33.952305 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-19 19:41:33.952323 | orchestrator | Monday 19 May 2025 19:40:11 +0000 (0:00:00.621) 0:01:08.102 ************ 2025-05-19 19:41:33.952340 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.952357 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.952375 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.952390 | orchestrator | 2025-05-19 19:41:33.952407 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-19 19:41:33.952422 | orchestrator | Monday 19 May 2025 19:40:12 +0000 (0:00:00.444) 0:01:08.547 ************ 2025-05-19 19:41:33.952438 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952453 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952468 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952484 | orchestrator | 2025-05-19 19:41:33.952501 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-19 19:41:33.952539 | orchestrator | Monday 19 May 2025 19:40:13 +0000 (0:00:00.773) 0:01:09.320 ************ 2025-05-19 19:41:33.952555 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952570 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952585 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952600 | orchestrator | 2025-05-19 19:41:33.952616 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-19 19:41:33.952631 | orchestrator | Monday 19 May 2025 19:40:13 +0000 (0:00:00.752) 0:01:10.073 ************ 2025-05-19 19:41:33.952646 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952660 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952677 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952694 | orchestrator | 2025-05-19 19:41:33.952710 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-19 19:41:33.952727 | orchestrator | Monday 19 May 2025 19:40:14 +0000 (0:00:00.455) 0:01:10.528 ************ 2025-05-19 19:41:33.952743 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952758 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952774 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952789 | orchestrator | 2025-05-19 19:41:33.952804 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-19 19:41:33.952819 | orchestrator | Monday 19 May 2025 19:40:14 +0000 (0:00:00.271) 0:01:10.800 ************ 
2025-05-19 19:41:33.952834 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952850 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952864 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952880 | orchestrator | 2025-05-19 19:41:33.952895 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-19 19:41:33.952909 | orchestrator | Monday 19 May 2025 19:40:14 +0000 (0:00:00.326) 0:01:11.126 ************ 2025-05-19 19:41:33.952925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.952940 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.952955 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.952970 | orchestrator | 2025-05-19 19:41:33.952985 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-19 19:41:33.953002 | orchestrator | Monday 19 May 2025 19:40:15 +0000 (0:00:00.310) 0:01:11.437 ************ 2025-05-19 19:41:33.953018 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953035 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953060 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953075 | orchestrator | 2025-05-19 19:41:33.953114 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-19 19:41:33.953129 | orchestrator | Monday 19 May 2025 19:40:15 +0000 (0:00:00.392) 0:01:11.830 ************ 2025-05-19 19:41:33.953144 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953159 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953175 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953191 | orchestrator | 2025-05-19 19:41:33.953207 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-19 19:41:33.953222 | orchestrator | Monday 19 May 2025 19:40:15 +0000 (0:00:00.335) 0:01:12.166 ************ 2025-05-19 19:41:33.953237 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953252 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953268 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953283 | orchestrator | 2025-05-19 19:41:33.953299 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-19 19:41:33.953316 | orchestrator | Monday 19 May 2025 19:40:16 +0000 (0:00:00.432) 0:01:12.599 ************ 2025-05-19 19:41:33.953331 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953345 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953361 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953377 | orchestrator | 2025-05-19 19:41:33.953408 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-19 19:41:33.953422 | orchestrator | Monday 19 May 2025 19:40:16 +0000 (0:00:00.339) 0:01:12.938 ************ 2025-05-19 19:41:33.953448 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953461 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953474 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953487 | orchestrator | 2025-05-19 19:41:33.953500 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-19 19:41:33.953513 | orchestrator | Monday 19 May 2025 19:40:16 +0000 (0:00:00.225) 0:01:13.164 ************ 2025-05-19 19:41:33.953526 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 19:41:33.953540 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953553 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953566 | orchestrator | 2025-05-19 19:41:33.953581 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 19:41:33.953598 | orchestrator | Monday 19 May 2025 19:40:17 +0000 (0:00:00.329) 0:01:13.494 ************ 2025-05-19 19:41:33.953613 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:41:33.953629 | orchestrator | 2025-05-19 19:41:33.953645 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-19 19:41:33.953660 | orchestrator | Monday 19 May 2025 19:40:17 +0000 (0:00:00.691) 0:01:14.186 ************ 2025-05-19 19:41:33.953676 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.953692 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.953707 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.953722 | orchestrator | 2025-05-19 19:41:33.953738 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-19 19:41:33.953754 | orchestrator | Monday 19 May 2025 19:40:18 +0000 (0:00:00.440) 0:01:14.627 ************ 2025-05-19 19:41:33.953770 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.953785 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.953800 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.953815 | orchestrator | 2025-05-19 19:41:33.953831 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-19 19:41:33.953847 | orchestrator | Monday 19 May 2025 19:40:18 +0000 (0:00:00.476) 0:01:15.103 ************ 2025-05-19 19:41:33.953863 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953880 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953897 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.953914 | orchestrator | 2025-05-19 19:41:33.953930 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-19 19:41:33.953946 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:00.387) 0:01:15.491 ************ 2025-05-19 19:41:33.953961 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.953977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.953992 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.954007 | orchestrator | 2025-05-19 19:41:33.954106 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-19 19:41:33.954125 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:00.390) 0:01:15.881 ************ 2025-05-19 19:41:33.954141 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.954156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.954171 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.954187 | orchestrator | 2025-05-19 19:41:33.954202 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-19 19:41:33.954220 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:00.281) 0:01:16.163 ************ 2025-05-19 19:41:33.954237 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.954253 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.954267 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 19:41:33.954283 | orchestrator | 2025-05-19 19:41:33.954300 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-19 19:41:33.954316 | orchestrator | Monday 19 May 2025 19:40:20 +0000 (0:00:00.491) 0:01:16.654 ************ 2025-05-19 19:41:33.954333 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.954363 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.954378 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.954394 | orchestrator | 2025-05-19 19:41:33.954411 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-19 19:41:33.954427 | orchestrator | Monday 19 May 2025 19:40:20 +0000 (0:00:00.405) 0:01:17.060 ************ 2025-05-19 19:41:33.954442 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.954458 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.954475 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.954491 | orchestrator | 2025-05-19 19:41:33.954507 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-19 19:41:33.954523 | orchestrator | Monday 19 May 2025 19:40:21 +0000 (0:00:00.544) 0:01:17.605 ************ 2025-05-19 19:41:33.954543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954617 | orchestrator | 2025-05-19 19:41:33 | INFO  | Task 0d6cc13f-8763-4ca5-bfec-9befba671186 is in state SUCCESS 2025-05-19 19:41:33.954635 | orchestrator | 2025-05-19 19:41:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:41:33.954655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954885 | orchestrator | 2025-05-19 19:41:33.954902 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-19 19:41:33.954918 | orchestrator | Monday 19 May 2025 19:40:22 +0000 (0:00:01.645) 0:01:19.250 ************ 2025-05-19 19:41:33.954943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.954988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 
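The ovn-db role is bootstrapping a three-node Raft cluster for both the OVN_Northbound and OVN_Southbound databases on testbed-node-0/1/2; the config directories and config.json files being written here feed the ovn_nb_db, ovn_sb_db and ovn_northd containers that the handlers restart further down. Once those containers are up, the cluster state can be inspected from any of the three controllers; a sketch, assuming the usual OVN control-socket paths inside the kolla images (these can differ between image builds) and Docker as the engine:

  # Raft membership and current leader of the northbound DB (run on any of testbed-node-0/1/2)
  docker exec ovn_nb_db ovs-appctl -t /run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound

  # Same check for the southbound DB that every ovn-controller connects to on :6642
  docker exec ovn_sb_db ovs-appctl -t /run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

  # The service definition rendered by this task, as the container sees it via the config_files mount
  docker exec ovn_northd cat /var/lib/kolla/config_files/config.json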
19:41:33.955005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955208 | orchestrator | 2025-05-19 19:41:33.955226 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-19 19:41:33.955243 | orchestrator | Monday 19 May 2025 19:40:26 +0000 (0:00:03.973) 0:01:23.223 ************ 2025-05-19 19:41:33.955268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.955437 | orchestrator | 2025-05-19 19:41:33.955452 | orchestrator | TASK 
[ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.955467 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:02.679) 0:01:25.903 ************ 2025-05-19 19:41:33.955482 | orchestrator | 2025-05-19 19:41:33.955497 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.955512 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:00.053) 0:01:25.957 ************ 2025-05-19 19:41:33.955527 | orchestrator | 2025-05-19 19:41:33.955542 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.955556 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:00.052) 0:01:26.009 ************ 2025-05-19 19:41:33.955571 | orchestrator | 2025-05-19 19:41:33.955586 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-19 19:41:33.955602 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:00.064) 0:01:26.074 ************ 2025-05-19 19:41:33.955618 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.955633 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.955656 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.955671 | orchestrator | 2025-05-19 19:41:33.955684 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-19 19:41:33.955697 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:08.171) 0:01:34.246 ************ 2025-05-19 19:41:33.955709 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.955722 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.955735 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.955748 | orchestrator | 2025-05-19 19:41:33.955761 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-19 19:41:33.955775 | orchestrator | Monday 19 May 2025 19:40:40 +0000 (0:00:02.760) 0:01:37.006 ************ 2025-05-19 19:41:33.955789 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.955803 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.955817 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.955832 | orchestrator | 2025-05-19 19:41:33.955845 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-19 19:41:33.955860 | orchestrator | Monday 19 May 2025 19:40:48 +0000 (0:00:07.926) 0:01:44.933 ************ 2025-05-19 19:41:33.955873 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.955887 | orchestrator | 2025-05-19 19:41:33.955901 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-19 19:41:33.955924 | orchestrator | Monday 19 May 2025 19:40:48 +0000 (0:00:00.119) 0:01:45.052 ************ 2025-05-19 19:41:33.955939 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.955955 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.955969 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.955982 | orchestrator | 2025-05-19 19:41:33.955997 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-19 19:41:33.956020 | orchestrator | Monday 19 May 2025 19:40:49 +0000 (0:00:00.962) 0:01:46.015 ************ 2025-05-19 19:41:33.956033 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.956046 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.956060 | 
orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.956072 | orchestrator | 2025-05-19 19:41:33.956108 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-19 19:41:33.956121 | orchestrator | Monday 19 May 2025 19:40:50 +0000 (0:00:00.748) 0:01:46.763 ************ 2025-05-19 19:41:33.956135 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.956148 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.956162 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.956174 | orchestrator | 2025-05-19 19:41:33.956187 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-19 19:41:33.956199 | orchestrator | Monday 19 May 2025 19:40:51 +0000 (0:00:01.208) 0:01:47.971 ************ 2025-05-19 19:41:33.956212 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.956224 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.956236 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.956249 | orchestrator | 2025-05-19 19:41:33.956262 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-19 19:41:33.956277 | orchestrator | Monday 19 May 2025 19:40:52 +0000 (0:00:00.737) 0:01:48.708 ************ 2025-05-19 19:41:33.956289 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.956302 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.956315 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.956327 | orchestrator | 2025-05-19 19:41:33.956341 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-19 19:41:33.956353 | orchestrator | Monday 19 May 2025 19:40:54 +0000 (0:00:01.608) 0:01:50.317 ************ 2025-05-19 19:41:33.956367 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.956380 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.956393 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.956406 | orchestrator | 2025-05-19 19:41:33.956420 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-19 19:41:33.956433 | orchestrator | Monday 19 May 2025 19:40:54 +0000 (0:00:00.798) 0:01:51.115 ************ 2025-05-19 19:41:33.956446 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.956459 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.956473 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.956486 | orchestrator | 2025-05-19 19:41:33.956498 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-19 19:41:33.956511 | orchestrator | Monday 19 May 2025 19:40:55 +0000 (0:00:00.596) 0:01:51.711 ************ 2025-05-19 19:41:33.956525 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956539 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-19 19:41:33.956553 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956576 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956602 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956628 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956643 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956671 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956685 | orchestrator | 2025-05-19 19:41:33.956698 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-19 19:41:33.956712 | orchestrator | Monday 19 May 2025 19:40:57 +0000 (0:00:01.599) 0:01:53.311 ************ 2025-05-19 19:41:33.956726 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956740 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956754 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956769 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956882 | orchestrator | 2025-05-19 19:41:33.956895 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-19 19:41:33.956908 | orchestrator | Monday 19 May 2025 19:41:01 +0000 (0:00:04.766) 0:01:58.077 ************ 2025-05-19 19:41:33.956921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956935 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956948 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956970 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.956988 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.957002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.957023 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.957037 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.957050 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:41:33.957064 | orchestrator | 2025-05-19 19:41:33.957078 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.957117 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:03.566) 0:02:01.644 ************ 2025-05-19 19:41:33.957130 | orchestrator | 2025-05-19 19:41:33.957144 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.957159 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.074) 0:02:01.719 ************ 2025-05-19 19:41:33.957173 | orchestrator | 2025-05-19 19:41:33.957186 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-19 19:41:33.957200 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.299) 0:02:02.018 ************ 2025-05-19 19:41:33.957214 | orchestrator | 2025-05-19 19:41:33.957228 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-19 19:41:33.957242 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.167) 0:02:02.186 ************ 2025-05-19 19:41:33.957256 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.957271 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.957286 | orchestrator | 2025-05-19 19:41:33.957300 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-19 19:41:33.957314 | orchestrator | Monday 19 May 2025 19:41:12 +0000 (0:00:06.610) 0:02:08.797 ************ 2025-05-19 19:41:33.957345 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.957361 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.957376 | orchestrator | 2025-05-19 19:41:33.957391 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-19 19:41:33.957405 | orchestrator | Monday 19 May 2025 19:41:19 +0000 (0:00:06.942) 0:02:15.739 ************ 2025-05-19 19:41:33.957421 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:41:33.957435 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:41:33.957449 | orchestrator | 2025-05-19 19:41:33.957464 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-19 19:41:33.957479 | orchestrator | Monday 19 May 2025 19:41:25 +0000 (0:00:06.270) 0:02:22.009 ************ 2025-05-19 19:41:33.957494 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:41:33.957508 | orchestrator | 2025-05-19 19:41:33.957523 | orchestrator | TASK [ovn-db : Get 
OVN_Northbound cluster leader] ****************************** 2025-05-19 19:41:33.957537 | orchestrator | Monday 19 May 2025 19:41:26 +0000 (0:00:00.292) 0:02:22.301 ************ 2025-05-19 19:41:33.957552 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.957566 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.957581 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.957595 | orchestrator | 2025-05-19 19:41:33.957610 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-19 19:41:33.957624 | orchestrator | Monday 19 May 2025 19:41:26 +0000 (0:00:00.898) 0:02:23.199 ************ 2025-05-19 19:41:33.957638 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.957653 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.957667 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.957681 | orchestrator | 2025-05-19 19:41:33.957697 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-19 19:41:33.957712 | orchestrator | Monday 19 May 2025 19:41:27 +0000 (0:00:00.700) 0:02:23.900 ************ 2025-05-19 19:41:33.957728 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.957743 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.957758 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.957772 | orchestrator | 2025-05-19 19:41:33.957785 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-19 19:41:33.957806 | orchestrator | Monday 19 May 2025 19:41:28 +0000 (0:00:01.089) 0:02:24.990 ************ 2025-05-19 19:41:33.957819 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:41:33.957834 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:41:33.957847 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:41:33.957861 | orchestrator | 2025-05-19 19:41:33.957876 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-19 19:41:33.957889 | orchestrator | Monday 19 May 2025 19:41:29 +0000 (0:00:00.856) 0:02:25.846 ************ 2025-05-19 19:41:33.957904 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.957917 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.957930 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.957944 | orchestrator | 2025-05-19 19:41:33.957957 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-19 19:41:33.957972 | orchestrator | Monday 19 May 2025 19:41:30 +0000 (0:00:00.796) 0:02:26.643 ************ 2025-05-19 19:41:33.957985 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:41:33.957999 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:41:33.958013 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:41:33.958163 | orchestrator | 2025-05-19 19:41:33.958177 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:41:33.958203 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-19 19:41:33.958219 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-19 19:41:33.958233 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-19 19:41:33.958260 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:41:33.958274 | 
orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:41:33.958287 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:41:33.958301 | orchestrator | 2025-05-19 19:41:33.958314 | orchestrator | 2025-05-19 19:41:33.958328 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:41:33.958343 | orchestrator | Monday 19 May 2025 19:41:31 +0000 (0:00:01.277) 0:02:27.921 ************ 2025-05-19 19:41:33.958356 | orchestrator | =============================================================================== 2025-05-19 19:41:33.958370 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 22.57s 2025-05-19 19:41:33.958383 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.18s 2025-05-19 19:41:33.958396 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.78s 2025-05-19 19:41:33.958409 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.20s 2025-05-19 19:41:33.958422 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.70s 2025-05-19 19:41:33.958436 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.77s 2025-05-19 19:41:33.958450 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.97s 2025-05-19 19:41:33.958465 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.57s 2025-05-19 19:41:33.958479 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.22s 2025-05-19 19:41:33.958494 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.68s 2025-05-19 19:41:33.958508 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.35s 2025-05-19 19:41:33.958522 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.19s 2025-05-19 19:41:33.958535 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.10s 2025-05-19 19:41:33.958548 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.82s 2025-05-19 19:41:33.958561 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.74s 2025-05-19 19:41:33.958575 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.74s 2025-05-19 19:41:33.958588 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.65s 2025-05-19 19:41:33.958600 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.61s 2025-05-19 19:41:33.958614 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s 2025-05-19 19:41:33.958628 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.45s 2025-05-19 19:41:37.004987 | orchestrator | 2025-05-19 19:41:37 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:41:37.007234 | orchestrator | 2025-05-19 19:41:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:41:37.010821 | orchestrator | 2025-05-19 19:41:37 | INFO  | Task 
677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:41:37.010864 | orchestrator | 2025-05-19 19:41:37 | INFO  | Wait 1 second(s) until the next check
[... output condensed: tasks d670f4a6-f68c-4e52-bfc9-35bb887844d2, 6cbcb477-08de-4f2b-846d-588e50cbe210 and 677fdd63-0fab-44f5-96d8-fc3658f5061b are reported in state STARTED roughly every 3 seconds, each cycle followed by "Wait 1 second(s) until the next check", from 19:41:40 until 19:43:32 ...]
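The repeated status lines in this section come from the deployment wrapper polling a set of background task IDs until they leave the STARTED state. As a rough illustration only (not the actual OSISM code), a wait loop of this shape could look like the Python sketch below; wait_for_tasks and the dummy fake_state helper are hypothetical names introduced here for the example.

import time
from typing import Callable, Iterable

def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   check_interval: float = 1.0) -> None:
    """Poll the given task IDs until none of them is in state STARTED."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":  # e.g. SUCCESS or FAILURE
                pending.discard(task_id)
        if pending:
            print(f"Wait {check_interval:.0f} second(s) until the next check")
            time.sleep(check_interval)

# Dummy state source so the sketch is self-contained: each task reports
# STARTED twice and then SUCCESS.
if __name__ == "__main__":
    calls = {}

    def fake_state(task_id: str) -> str:
        calls[task_id] = calls.get(task_id, 0) + 1
        return "STARTED" if calls[task_id] <= 2 else "SUCCESS"

    wait_for_tasks(["d670f4a6", "6cbcb477", "677fdd63"], fake_state)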
2025-05-19 19:43:32.992488 | orchestrator | 2025-05-19 19:43:32 | INFO  | Task baac6c39-528f-429e-84ab-2755a50b2fba is in state STARTED
2025-05-19 19:43:42.260494 | orchestrator | 2025-05-19 19:43:42 | INFO  | Task baac6c39-528f-429e-84ab-2755a50b2fba is in state SUCCESS
[... output condensed: the remaining three tasks continue to be polled every ~3 seconds and stay in state STARTED until 19:45:19 ...]
2025-05-19 19:45:23.004115 | orchestrator | 2025-05-19 19:45:22 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:45:23.005597 | orchestrator | 2025-05-19 19:45:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:23.007517 | orchestrator | 2025-05-19 19:45:23 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b
is in state STARTED 2025-05-19 19:45:23.008211 | orchestrator | 2025-05-19 19:45:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:26.052345 | orchestrator | 2025-05-19 19:45:26 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state STARTED 2025-05-19 19:45:26.052473 | orchestrator | 2025-05-19 19:45:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:26.052498 | orchestrator | 2025-05-19 19:45:26 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:26.052517 | orchestrator | 2025-05-19 19:45:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:29.108878 | orchestrator | 2025-05-19 19:45:29 | INFO  | Task d670f4a6-f68c-4e52-bfc9-35bb887844d2 is in state SUCCESS 2025-05-19 19:45:29.110360 | orchestrator | 2025-05-19 19:45:29.110414 | orchestrator | None 2025-05-19 19:45:29.110428 | orchestrator | 2025-05-19 19:45:29.110440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:45:29.110452 | orchestrator | 2025-05-19 19:45:29.110463 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:45:29.110475 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.353) 0:00:00.353 ************ 2025-05-19 19:45:29.110486 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.110499 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.110510 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.110521 | orchestrator | 2025-05-19 19:45:29.110532 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:45:29.110543 | orchestrator | Monday 19 May 2025 19:37:41 +0000 (0:00:00.467) 0:00:00.821 ************ 2025-05-19 19:45:29.110555 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-05-19 19:45:29.110566 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-05-19 19:45:29.110577 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-05-19 19:45:29.110587 | orchestrator | 2025-05-19 19:45:29.110597 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-05-19 19:45:29.110607 | orchestrator | 2025-05-19 19:45:29.110616 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-19 19:45:29.110626 | orchestrator | Monday 19 May 2025 19:37:42 +0000 (0:00:00.348) 0:00:01.169 ************ 2025-05-19 19:45:29.110636 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.110665 | orchestrator | 2025-05-19 19:45:29.110675 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-05-19 19:45:29.110685 | orchestrator | Monday 19 May 2025 19:37:43 +0000 (0:00:00.824) 0:00:01.993 ************ 2025-05-19 19:45:29.110694 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.110704 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.110713 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.110722 | orchestrator | 2025-05-19 19:45:29.110732 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-19 19:45:29.110741 | orchestrator | Monday 19 May 2025 19:37:44 +0000 (0:00:00.998) 0:00:02.992 ************ 2025-05-19 19:45:29.110751 | orchestrator | included: sysctl for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.110761 | orchestrator | 2025-05-19 19:45:29.110770 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-19 19:45:29.110780 | orchestrator | Monday 19 May 2025 19:37:45 +0000 (0:00:01.654) 0:00:04.646 ************ 2025-05-19 19:45:29.110789 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.110799 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.110808 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.110817 | orchestrator | 2025-05-19 19:45:29.110827 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-19 19:45:29.110836 | orchestrator | Monday 19 May 2025 19:37:46 +0000 (0:00:01.088) 0:00:05.735 ************ 2025-05-19 19:45:29.110846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110868 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110888 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110897 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-19 19:45:29.110946 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 19:45:29.110960 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 19:45:29.110972 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-19 19:45:29.110982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 19:45:29.110993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 19:45:29.111004 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-19 19:45:29.111015 | orchestrator | 2025-05-19 19:45:29.111026 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 19:45:29.111037 | orchestrator | Monday 19 May 2025 19:37:49 +0000 (0:00:02.656) 0:00:08.391 ************ 2025-05-19 19:45:29.111049 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-19 19:45:29.111060 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-19 19:45:29.111070 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-19 19:45:29.111081 | orchestrator | 2025-05-19 19:45:29.111092 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 19:45:29.111103 | orchestrator | Monday 19 May 2025 19:37:50 +0000 (0:00:01.098) 0:00:09.489 ************ 2025-05-19 19:45:29.111114 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-19 19:45:29.111125 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-19 19:45:29.111260 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-19 19:45:29.111273 | orchestrator | 2025-05-19 19:45:29.111285 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 19:45:29.111303 | orchestrator | Monday 19 May 2025 19:37:52 +0000 (0:00:02.092) 0:00:11.582 ************ 2025-05-19 19:45:29.111313 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-19 19:45:29.111323 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.111344 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-19 19:45:29.111354 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.111364 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-19 19:45:29.111373 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.111382 | orchestrator | 2025-05-19 19:45:29.111392 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-19 19:45:29.111401 | orchestrator | Monday 19 May 2025 19:37:53 +0000 (0:00:01.053) 0:00:12.636 ************ 2025-05-19 19:45:29.111414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.111500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.111511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.111521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.111536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': 
False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.111547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.111557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.111572 | orchestrator | 2025-05-19 19:45:29.111582 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-19 19:45:29.111592 | orchestrator | Monday 19 May 2025 19:37:56 +0000 (0:00:02.609) 0:00:15.245 ************ 2025-05-19 19:45:29.111602 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.111611 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.111621 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.111630 | orchestrator | 2025-05-19 19:45:29.111645 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-05-19 19:45:29.111655 | orchestrator | Monday 19 May 2025 19:37:58 +0000 (0:00:02.300) 0:00:17.546 ************ 2025-05-19 19:45:29.111665 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-19 19:45:29.111674 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-19 19:45:29.111684 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-19 19:45:29.111693 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-19 19:45:29.111703 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-19 19:45:29.111712 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-19 19:45:29.111721 | orchestrator | 2025-05-19 19:45:29.111731 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-19 19:45:29.111740 | orchestrator | Monday 19 May 2025 19:38:03 +0000 (0:00:05.088) 0:00:22.635 ************ 2025-05-19 19:45:29.111750 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.111759 | orchestrator | 
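The changed/skipping pattern in the loadbalancer items above follows each service's 'enabled' flag: haproxy, proxysql and keepalived are enabled and are processed on every node, while haproxy-ssh carries 'enabled': False and is skipped everywhere. A rough Python sketch of that filtering, reusing the dictionary shape that appears in the log items (an illustration only, not the actual kolla-ansible task code):

# Hypothetical illustration of why items with 'enabled': False show up as "skipping".
services = {
    "haproxy":     {"container_name": "haproxy",     "enabled": True},
    "proxysql":    {"container_name": "proxysql",    "enabled": True},
    "keepalived":  {"container_name": "keepalived",  "enabled": True},
    "haproxy-ssh": {"container_name": "haproxy_ssh", "enabled": False},
}

for key, value in services.items():          # loop over the service dict, as in the log items
    if value["enabled"]:
        print(f"changed:  item={key}")        # task runs for enabled services
    else:
        print(f"skipping: item={key}")        # disabled services are skipped on every node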
changed: [testbed-node-1] 2025-05-19 19:45:29.111768 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.111778 | orchestrator | 2025-05-19 19:45:29.111787 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-19 19:45:29.111797 | orchestrator | Monday 19 May 2025 19:38:05 +0000 (0:00:01.702) 0:00:24.337 ************ 2025-05-19 19:45:29.111806 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.111816 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.111825 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.111835 | orchestrator | 2025-05-19 19:45:29.111844 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-19 19:45:29.111853 | orchestrator | Monday 19 May 2025 19:38:07 +0000 (0:00:01.838) 0:00:26.176 ************ 2025-05-19 19:45:29.111863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.111879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.111897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.111907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.111951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.111971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.111988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112049 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.112059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112079 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.112095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112105 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.112115 | orchestrator | 2025-05-19 19:45:29.112125 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-19 19:45:29.112135 | orchestrator | Monday 19 May 2025 19:38:09 +0000 (0:00:02.168) 0:00:28.345 ************ 2025-05-19 19:45:29.112145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.112375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.112387 | orchestrator | 2025-05-19 19:45:29.112397 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-19 19:45:29.112407 | orchestrator | Monday 19 May 2025 19:38:15 +0000 (0:00:06.376) 0:00:34.722 ************ 2025-05-19 19:45:29.112417 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.112469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.113414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.113443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.113454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.113476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.113490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.113501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.113511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.113521 | orchestrator | 2025-05-19 19:45:29.113531 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-19 19:45:29.113542 | orchestrator | Monday 19 May 2025 19:38:19 +0000 (0:00:03.516) 0:00:38.238 ************ 2025-05-19 19:45:29.113559 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 19:45:29.113569 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 19:45:29.113579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 19:45:29.113589 | orchestrator | 2025-05-19 19:45:29.113598 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-19 19:45:29.113608 | orchestrator | Monday 19 May 2025 19:38:22 +0000 (0:00:02.789) 0:00:41.027 ************ 2025-05-19 19:45:29.113617 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 19:45:29.113627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 19:45:29.113637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 19:45:29.113652 | orchestrator | 2025-05-19 19:45:29.113662 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-19 19:45:29.113672 | orchestrator | Monday 19 May 2025 19:38:26 +0000 (0:00:04.372) 0:00:45.400 ************ 2025-05-19 19:45:29.113714 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.113725 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.113735 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.113745 | orchestrator | 2025-05-19 19:45:29.113754 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-19 19:45:29.113764 | orchestrator | Monday 19 May 2025 19:38:28 +0000 (0:00:02.483) 0:00:47.884 ************ 2025-05-19 19:45:29.113774 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 19:45:29.113834 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 19:45:29.113846 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 19:45:29.113957 | orchestrator | 2025-05-19 19:45:29.113971 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-19 19:45:29.113981 | orchestrator | Monday 19 May 2025 19:38:32 +0000 (0:00:03.132) 0:00:51.017 ************ 2025-05-19 19:45:29.113991 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 19:45:29.114000 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 19:45:29.114010 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 19:45:29.114067 | orchestrator | 2025-05-19 19:45:29.114087 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-19 19:45:29.114097 | orchestrator | Monday 19 May 2025 19:38:34 +0000 (0:00:02.150) 0:00:53.167 ************ 2025-05-19 19:45:29.114107 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-19 19:45:29.114117 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-19 19:45:29.114127 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-19 19:45:29.114136 | orchestrator | 2025-05-19 19:45:29.114145 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-19 19:45:29.114155 | orchestrator | Monday 19 May 2025 19:38:36 +0000 (0:00:02.371) 0:00:55.539 ************ 2025-05-19 19:45:29.114164 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-19 19:45:29.114174 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-19 19:45:29.114183 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-19 19:45:29.114193 | orchestrator | 2025-05-19 19:45:29.114202 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-19 19:45:29.114212 | orchestrator | Monday 19 May 2025 19:38:39 +0000 (0:00:02.570) 0:00:58.110 ************ 2025-05-19 19:45:29.114221 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.114231 | orchestrator | 2025-05-19 19:45:29.114240 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-19 19:45:29.114275 | orchestrator | Monday 19 May 2025 19:38:40 +0000 (0:00:01.036) 0:00:59.146 ************ 2025-05-19 19:45:29.114285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.114404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.114455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.114467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.114477 | orchestrator | 2025-05-19 19:45:29.114508 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-19 19:45:29.114519 | orchestrator | Monday 19 May 2025 19:38:43 +0000 (0:00:03.342) 0:01:02.489 ************ 2025-05-19 19:45:29.114529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.114539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114653 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.114664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.114681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114708 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.114718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.114728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114743 | orchestrator | 
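All items of the backend internal TLS certificate and key tasks are skipped here, which is consistent with backend TLS being disabled in this testbed. Separately, each service item in these entries carries a healthcheck block with interval, retries, start_period, test and timeout fields. A small sketch of how such a block could map onto Docker health options, assuming the numeric values are seconds (an illustration only, not kolla-ansible's own implementation):

# Hypothetical helper: translate a healthcheck dict like the ones in the log
# into docker CLI flags. The numeric fields are assumed to be seconds.
def healthcheck_to_docker_flags(hc):
    cmd = hc["test"][-1]                      # e.g. 'healthcheck_listen proxysql 6032'
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

print(healthcheck_to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"], "timeout": "30",
}))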
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114753 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.114762 | orchestrator | 2025-05-19 19:45:29.114772 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-19 19:45:29.114782 | orchestrator | Monday 19 May 2025 19:38:44 +0000 (0:00:00.928) 0:01:03.418 ************ 2025-05-19 19:45:29.114792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.114808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114833 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.114843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': 
'30'}}})  2025-05-19 19:45:29.114853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114873 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.114887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-19 19:45:29.114903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 19:45:29.114913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 19:45:29.114949 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.114959 | orchestrator | 2025-05-19 19:45:29.114969 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-19 19:45:29.114984 | orchestrator | Monday 19 May 
2025 19:38:46 +0000 (0:00:01.626) 0:01:05.044 ************ 2025-05-19 19:45:29.114994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 19:45:29.115032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 19:45:29.115042 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 19:45:29.115052 | orchestrator | 2025-05-19 19:45:29.115061 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-19 19:45:29.115071 | orchestrator | Monday 19 May 2025 19:38:47 +0000 (0:00:01.901) 0:01:06.946 ************ 2025-05-19 19:45:29.115080 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 19:45:29.115090 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 19:45:29.115100 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 19:45:29.115109 | orchestrator | 2025-05-19 19:45:29.115119 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-19 19:45:29.115128 | orchestrator | Monday 19 May 2025 19:38:49 +0000 (0:00:01.918) 0:01:08.864 ************ 2025-05-19 19:45:29.115138 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 19:45:29.115148 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 19:45:29.115157 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 19:45:29.115166 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 19:45:29.115176 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.115185 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 19:45:29.115195 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.115205 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 19:45:29.115214 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.115224 | orchestrator | 2025-05-19 19:45:29.115233 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-19 19:45:29.115254 | orchestrator | Monday 19 May 2025 19:38:51 +0000 (0:00:01.915) 0:01:10.780 ************ 2025-05-19 19:45:29.115269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115280 | 
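The "Check loadbalancer containers" task reports changed for haproxy, proxysql and keepalived on all three nodes, i.e. the containers will be (re)created from the listed images and volumes. For a manual spot check on a node afterwards, something along these lines can be used (a sketch, not part of the job; container names taken from the container_name fields in the log):

# Hypothetical post-deployment spot check on one of the testbed nodes.
import subprocess

for name in ("haproxy", "proxysql", "keepalived"):
    result = subprocess.run(
        ["docker", "inspect", "--format", "{{.Name}} {{.State.Status}}", name],
        capture_output=True, text=True,
    )
    # Print the container state, or the docker error if the container is missing.
    print(result.stdout.strip() or f"{name}: {result.stderr.strip()}")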
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 19:45:29.115343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.115378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.115389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.115405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.115415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 19:45:29.115425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306', '__omit_place_holder__7fd85e3f8f0dc0deb3901738dd7c1ec0341f5306'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 19:45:29.115525 | orchestrator | 2025-05-19 19:45:29.115625 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-19 19:45:29.115648 | orchestrator | Monday 19 May 2025 19:38:55 +0000 (0:00:03.604) 0:01:14.384 ************ 2025-05-19 19:45:29.115658 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.115668 | orchestrator | 2025-05-19 19:45:29.115696 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-19 19:45:29.115706 | orchestrator | Monday 19 May 2025 19:38:56 +0000 (0:00:00.940) 0:01:15.325 ************ 2025-05-19 19:45:29.115728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 19:45:29.115741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.115761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 19:45:29.115778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.115789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.115805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.115820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.115882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.115894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 19:45:29.115913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.115987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116043 | orchestrator | 2025-05-19 19:45:29.116053 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-19 19:45:29.116063 | orchestrator | Monday 19 May 2025 19:39:00 +0000 (0:00:03.930) 0:01:19.255 ************ 2025-05-19 19:45:29.116079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 19:45:29.116090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.116100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116126 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.116136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 19:45:29.116153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.116169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116315 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.116326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 19:45:29.116344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.116354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116381 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.116390 | orchestrator | 2025-05-19 19:45:29.116400 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-19 19:45:29.116410 | orchestrator | Monday 19 May 2025 19:39:01 +0000 (0:00:00.855) 0:01:20.110 ************ 2025-05-19 19:45:29.116421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116444 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.116454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116479 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.116489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 19:45:29.116509 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.116530 | orchestrator | 2025-05-19 19:45:29.116540 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-19 19:45:29.116559 | orchestrator | Monday 19 May 2025 19:39:02 +0000 (0:00:01.096) 0:01:21.207 ************ 2025-05-19 19:45:29.116569 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.116578 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.116588 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.116597 | orchestrator | 2025-05-19 19:45:29.116607 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-19 19:45:29.116616 | orchestrator | Monday 19 May 2025 19:39:03 +0000 (0:00:01.372) 0:01:22.579 ************ 2025-05-19 19:45:29.116626 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.116635 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.116645 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.116659 | orchestrator | 2025-05-19 19:45:29.116674 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-19 19:45:29.116690 | orchestrator | Monday 19 May 2025 19:39:06 +0000 (0:00:02.481) 0:01:25.061 ************ 2025-05-19 19:45:29.116707 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.116723 | orchestrator | 2025-05-19 19:45:29.116742 | 
orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-19 19:45:29.116752 | orchestrator | Monday 19 May 2025 19:39:07 +0000 (0:00:01.004) 0:01:26.065 ************ 2025-05-19 19:45:29.116855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.116870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.116909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.116985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.116996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117022 | orchestrator | 2025-05-19 19:45:29.117031 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-19 19:45:29.117143 | orchestrator | Monday 19 May 2025 19:39:12 +0000 (0:00:04.986) 0:01:31.051 ************ 2025-05-19 19:45:29.117157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.117183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117204 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.117215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.117231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117261 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.117277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.117288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.117316 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.117332 | orchestrator | 2025-05-19 19:45:29.117348 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-19 19:45:29.117361 | orchestrator | Monday 19 May 2025 19:39:13 +0000 (0:00:01.269) 0:01:32.320 ************ 2025-05-19 19:45:29.117372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117398 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.117408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117434 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.117444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 19:45:29.117463 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.117473 | orchestrator | 2025-05-19 19:45:29.117482 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-19 19:45:29.117492 | orchestrator | Monday 19 May 2025 19:39:14 +0000 (0:00:01.387) 0:01:33.707 ************ 2025-05-19 19:45:29.117501 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.117511 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.117520 | orchestrator | changed: [testbed-node-2] 
2025-05-19 19:45:29.117530 | orchestrator | 2025-05-19 19:45:29.117539 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-19 19:45:29.117549 | orchestrator | Monday 19 May 2025 19:39:16 +0000 (0:00:01.627) 0:01:35.335 ************ 2025-05-19 19:45:29.117558 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.117567 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.117577 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.117586 | orchestrator | 2025-05-19 19:45:29.117595 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-19 19:45:29.117605 | orchestrator | Monday 19 May 2025 19:39:18 +0000 (0:00:02.555) 0:01:37.890 ************ 2025-05-19 19:45:29.117614 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.117624 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.117633 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.117642 | orchestrator | 2025-05-19 19:45:29.117670 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-19 19:45:29.117681 | orchestrator | Monday 19 May 2025 19:39:19 +0000 (0:00:00.305) 0:01:38.195 ************ 2025-05-19 19:45:29.117802 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.117813 | orchestrator | 2025-05-19 19:45:29.117823 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-19 19:45:29.117832 | orchestrator | Monday 19 May 2025 19:39:20 +0000 (0:00:01.120) 0:01:39.315 ************ 2025-05-19 19:45:29.117843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 19:45:29.117854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 19:45:29.117879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': 
True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 19:45:29.117889 | orchestrator | 2025-05-19 19:45:29.117899 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-19 19:45:29.117909 | orchestrator | Monday 19 May 2025 19:39:24 +0000 (0:00:04.592) 0:01:43.908 ************ 2025-05-19 19:45:29.117943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 19:45:29.117962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 19:45:29.117974 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.117990 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.118003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 19:45:29.118068 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.118081 | orchestrator | 2025-05-19 19:45:29.118091 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-19 19:45:29.118101 | orchestrator | Monday 19 May 2025 19:39:26 +0000 (0:00:01.914) 0:01:45.823 ************ 2025-05-19 19:45:29.118111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118139 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.118149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118169 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.118179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 19:45:29.118205 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.118214 | 
orchestrator | 2025-05-19 19:45:29.118224 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-19 19:45:29.118234 | orchestrator | Monday 19 May 2025 19:39:29 +0000 (0:00:02.420) 0:01:48.244 ************ 2025-05-19 19:45:29.118243 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.118252 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.118262 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.118271 | orchestrator | 2025-05-19 19:45:29.118280 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-19 19:45:29.118290 | orchestrator | Monday 19 May 2025 19:39:30 +0000 (0:00:00.936) 0:01:49.180 ************ 2025-05-19 19:45:29.118299 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.118308 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.118318 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.118333 | orchestrator | 2025-05-19 19:45:29.118343 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-19 19:45:29.118353 | orchestrator | Monday 19 May 2025 19:39:31 +0000 (0:00:01.332) 0:01:50.513 ************ 2025-05-19 19:45:29.118362 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.118372 | orchestrator | 2025-05-19 19:45:29.118381 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-19 19:45:29.118390 | orchestrator | Monday 19 May 2025 19:39:32 +0000 (0:00:00.992) 0:01:51.505 ************ 2025-05-19 19:45:29.118400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.118417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.118428 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.118545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118667 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118682 | orchestrator | 2025-05-19 19:45:29.118692 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-19 19:45:29.118702 | orchestrator | Monday 19 May 2025 19:39:37 +0000 (0:00:04.681) 0:01:56.187 ************ 2025-05-19 19:45:29.118712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.118722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118757 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118768 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.118783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.118793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118841 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.118851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.118861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.118911 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.118950 | orchestrator | 2025-05-19 19:45:29.118960 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-05-19 19:45:29.118977 | orchestrator | Monday 19 May 2025 19:39:38 +0000 (0:00:01.086) 0:01:57.273 ************ 2025-05-19 19:45:29.118987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119066 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.119075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119095 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.119105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-05-19 19:45:29.119160 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.119197 | orchestrator | 2025-05-19 19:45:29.119208 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-05-19 19:45:29.119242 | orchestrator | Monday 19 May 2025 19:39:39 +0000 (0:00:01.074) 0:01:58.348 ************ 2025-05-19 19:45:29.119252 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.119261 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.119316 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.119327 | orchestrator | 2025-05-19 19:45:29.119337 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-05-19 19:45:29.119346 | orchestrator | Monday 19 May 2025 19:39:41 +0000 (0:00:01.722) 0:02:00.070 ************ 2025-05-19 19:45:29.119356 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.119365 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.119375 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.119385 | orchestrator | 2025-05-19 19:45:29.119394 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-05-19 19:45:29.119404 | orchestrator | Monday 19 May 2025 19:39:43 +0000 (0:00:02.629) 
0:02:02.700 ************ 2025-05-19 19:45:29.119413 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.119423 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.119433 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.119442 | orchestrator | 2025-05-19 19:45:29.119452 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-05-19 19:45:29.119462 | orchestrator | Monday 19 May 2025 19:39:44 +0000 (0:00:00.329) 0:02:03.030 ************ 2025-05-19 19:45:29.119477 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.119487 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.119496 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.119506 | orchestrator | 2025-05-19 19:45:29.119515 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-19 19:45:29.119525 | orchestrator | Monday 19 May 2025 19:39:44 +0000 (0:00:00.502) 0:02:03.532 ************ 2025-05-19 19:45:29.119535 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.119544 | orchestrator | 2025-05-19 19:45:29.119554 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-19 19:45:29.119596 | orchestrator | Monday 19 May 2025 19:39:46 +0000 (0:00:01.888) 0:02:05.420 ************ 2025-05-19 19:45:29.119607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:45:29.119657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.119668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-05-19 19:45:29.119822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.119897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.119909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:45:29.120074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.120083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120145 | orchestrator | 2025-05-19 19:45:29.120158 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-19 19:45:29.120167 | orchestrator | Monday 19 May 2025 19:39:52 +0000 (0:00:06.472) 0:02:11.892 ************ 2025-05-19 19:45:29.120175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:45:29.120184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.120197 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120278 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.120345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:45:29.120360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.120376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120423 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.120432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:45:29.120453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:45:29.120462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.120514 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.120522 | orchestrator | 2025-05-19 19:45:29.120530 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-19 19:45:29.120538 | orchestrator | Monday 19 May 2025 19:39:54 +0000 (0:00:01.848) 0:02:13.741 ************ 2025-05-19 19:45:29.120546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120568 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.120576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120592 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.120600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 19:45:29.120616 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.120624 | orchestrator | 2025-05-19 19:45:29.120631 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-19 19:45:29.120639 | orchestrator | Monday 19 May 2025 19:39:56 +0000 (0:00:01.660) 0:02:15.401 ************ 2025-05-19 19:45:29.120647 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.120655 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.120663 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.120670 | orchestrator | 2025-05-19 19:45:29.120678 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-19 19:45:29.120686 | orchestrator | Monday 19 May 2025 19:39:57 +0000 (0:00:01.132) 0:02:16.533 ************ 2025-05-19 19:45:29.120694 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.120701 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.120709 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.120717 | orchestrator | 2025-05-19 19:45:29.120725 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-19 19:45:29.120732 | orchestrator | Monday 19 May 2025 19:39:59 +0000 (0:00:02.268) 0:02:18.802 ************ 2025-05-19 19:45:29.120740 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.120748 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.120756 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.120764 | orchestrator | 2025-05-19 19:45:29.120771 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-19 19:45:29.120784 | orchestrator | Monday 19 May 2025 19:40:00 +0000 (0:00:00.685) 0:02:19.487 ************ 2025-05-19 19:45:29.120792 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.120799 | orchestrator | 2025-05-19 19:45:29.120807 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-19 19:45:29.120820 | orchestrator | Monday 19 May 2025 19:40:01 +0000 (0:00:01.110) 0:02:20.597 ************ 2025-05-19 19:45:29.120829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:45:29.120843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.120859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:45:29.120878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.120893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:45:29.120912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.120946 | orchestrator | 2025-05-19 19:45:29.120958 | orchestrator | TASK [haproxy-config : Add configuration for glance when 
using single external frontend] *** 2025-05-19 19:45:29.120970 | orchestrator | Monday 19 May 2025 19:40:06 +0000 (0:00:05.111) 0:02:25.709 ************ 2025-05-19 19:45:29.120991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:45:29.121018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.121028 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:45:29.121063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.121072 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.121081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:45:29.121101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.121110 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.121118 | orchestrator | 2025-05-19 19:45:29.121126 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-19 19:45:29.121134 | orchestrator | Monday 19 May 2025 19:40:10 +0000 (0:00:04.052) 0:02:29.762 ************ 2025-05-19 19:45:29.121149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121167 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121202 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.121210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 19:45:29.121226 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.121234 | orchestrator | 2025-05-19 19:45:29.121242 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-19 19:45:29.121250 | orchestrator | Monday 19 May 2025 19:40:15 +0000 (0:00:04.977) 0:02:34.740 ************ 2025-05-19 19:45:29.121257 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.121265 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.121273 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.121281 | orchestrator | 2025-05-19 19:45:29.121289 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-19 19:45:29.121296 | orchestrator | Monday 19 May 2025 19:40:16 +0000 (0:00:01.166) 0:02:35.906 ************ 2025-05-19 19:45:29.121304 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.121312 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.121319 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.121327 | orchestrator | 2025-05-19 19:45:29.121335 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-19 19:45:29.121342 | orchestrator | Monday 19 May 2025 19:40:18 +0000 (0:00:01.811) 0:02:37.717 ************ 2025-05-19 19:45:29.121350 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121358 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.121366 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.121374 | orchestrator | 2025-05-19 19:45:29.121382 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-19 19:45:29.121390 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:00.380) 0:02:38.098 ************ 2025-05-19 19:45:29.121401 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.121409 | orchestrator | 2025-05-19 19:45:29.121417 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-19 19:45:29.121425 | orchestrator | Monday 19 May 2025 19:40:20 +0000 (0:00:01.032) 0:02:39.130 ************ 2025-05-19 19:45:29.121433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 19:45:29.121447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 19:45:29.121461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 19:45:29.121470 | orchestrator | 2025-05-19 19:45:29.121478 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-19 19:45:29.121486 | orchestrator | Monday 19 May 2025 19:40:23 +0000 (0:00:03.569) 0:02:42.700 ************ 2025-05-19 19:45:29.121494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 19:45:29.121502 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 19:45:29.121518 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.121530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 19:45:29.121544 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.121552 | orchestrator | 2025-05-19 19:45:29.121559 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-19 19:45:29.121567 | orchestrator | Monday 19 May 2025 19:40:24 +0000 (0:00:00.657) 0:02:43.358 ************ 2025-05-19 19:45:29.121575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121607 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121615 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.121623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 19:45:29.121643 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.121651 | orchestrator | 2025-05-19 19:45:29.121659 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-19 19:45:29.121667 | orchestrator | Monday 19 May 2025 19:40:25 +0000 (0:00:00.838) 0:02:44.197 ************ 2025-05-19 19:45:29.121674 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.121682 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.121690 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.121698 | orchestrator | 2025-05-19 19:45:29.121705 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-19 
19:45:29.121713 | orchestrator | Monday 19 May 2025 19:40:26 +0000 (0:00:01.148) 0:02:45.346 ************ 2025-05-19 19:45:29.121721 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.121729 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.121736 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.121744 | orchestrator | 2025-05-19 19:45:29.121752 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-19 19:45:29.121760 | orchestrator | Monday 19 May 2025 19:40:28 +0000 (0:00:02.065) 0:02:47.411 ************ 2025-05-19 19:45:29.121768 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.121775 | orchestrator | 2025-05-19 19:45:29.121784 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] *********************** 2025-05-19 19:45:29.121791 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:01.420) 0:02:48.832 ************ 2025-05-19 19:45:29.121799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 
'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.121874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.121882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 
'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.121890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.121899 | orchestrator | 2025-05-19 19:45:29.121910 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] *** 2025-05-19 19:45:29.121936 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:07.795) 0:02:56.628 ************ 2025-05-19 19:45:29.121945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.121960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.121971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': 
{'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.121980 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.121988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.122003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.122049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.122064 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.122085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.122105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.122118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.122130 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.122142 | orchestrator | 2025-05-19 19:45:29.122155 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] ************************** 2025-05-19 19:45:29.122167 | orchestrator | Monday 19 May 2025 19:40:38 +0000 (0:00:01.142) 0:02:57.770 ************ 2025-05-19 19:45:29.122179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122246 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.122260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122310 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.122318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})  2025-05-19 19:45:29.122349 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.122357 | orchestrator | 2025-05-19 19:45:29.122365 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] *************** 2025-05-19 19:45:29.122381 | orchestrator | Monday 19 May 2025 19:40:40 +0000 (0:00:01.351) 0:02:59.122 ************ 2025-05-19 19:45:29.122389 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.122397 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.122405 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.122412 | orchestrator | 2025-05-19 19:45:29.122420 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] *************** 2025-05-19 19:45:29.122428 | orchestrator | Monday 19 May 2025 19:40:41 +0000 (0:00:01.405) 0:03:00.527 ************ 2025-05-19 19:45:29.122436 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.122443 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.122451 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.122459 | orchestrator | 2025-05-19 19:45:29.122467 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-19 19:45:29.122480 | orchestrator | Monday 19 May 2025 19:40:43 +0000 (0:00:02.247) 0:03:02.774 ************ 2025-05-19 19:45:29.122493 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.122505 | orchestrator | 2025-05-19 19:45:29.122517 | orchestrator | TASK 
[haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-19 19:45:29.122530 | orchestrator | Monday 19 May 2025 19:40:44 +0000 (0:00:01.090) 0:03:03.865 ************ 2025-05-19 19:45:29.122556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:45:29.122588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:45:29.122606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:45:29.122621 | orchestrator | 2025-05-19 19:45:29.122629 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-19 19:45:29.122637 | orchestrator | Monday 19 May 2025 19:40:49 +0000 
(0:00:04.971) 0:03:08.836 ************ 2025-05-19 19:45:29.122650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:45:29.122665 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.122681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:45:29.122691 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.122717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:45:29.122732 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.122740 | orchestrator | 2025-05-19 19:45:29.122768 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-19 
19:45:29.122783 | orchestrator | Monday 19 May 2025 19:40:51 +0000 (0:00:01.360) 0:03:10.197 ************ 2025-05-19 19:45:29.122797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.122814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.122830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.122845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.122861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 19:45:29.122875 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.122890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.122903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.122912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.122975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.122991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 19:45:29.122999 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.123015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.123046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-19 19:45:29.123062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-19 19:45:29.123076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-19 19:45:29.123091 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123104 | orchestrator | 2025-05-19 19:45:29.123118 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-19 19:45:29.123126 | orchestrator | Monday 19 May 2025 19:40:53 +0000 (0:00:01.830) 0:03:12.027 ************ 2025-05-19 19:45:29.123134 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.123142 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.123151 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.123203 | orchestrator | 2025-05-19 19:45:29.123214 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-19 19:45:29.123225 | orchestrator | Monday 19 May 2025 19:40:54 +0000 (0:00:01.665) 0:03:13.693 ************ 2025-05-19 19:45:29.123238 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.123245 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.123251 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.123258 | orchestrator | 2025-05-19 19:45:29.123264 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-19 19:45:29.123271 | orchestrator | Monday 19 May 2025 19:40:57 +0000 (0:00:02.331) 0:03:16.024 ************ 2025-05-19 19:45:29.123277 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.123284 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123291 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123297 | orchestrator | 2025-05-19 19:45:29.123304 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-19 19:45:29.123310 | orchestrator | Monday 19 May 2025 19:40:57 +0000 (0:00:00.644) 0:03:16.669 
************ 2025-05-19 19:45:29.123317 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.123323 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123330 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123336 | orchestrator | 2025-05-19 19:45:29.123343 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-19 19:45:29.123349 | orchestrator | Monday 19 May 2025 19:40:58 +0000 (0:00:00.532) 0:03:17.201 ************ 2025-05-19 19:45:29.123356 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.123370 | orchestrator | 2025-05-19 19:45:29.123376 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-19 19:45:29.123383 | orchestrator | Monday 19 May 2025 19:40:59 +0000 (0:00:01.481) 0:03:18.683 ************ 2025-05-19 19:45:29.123395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:45:29.123404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:45:29.123442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:45:29.123487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123501 | orchestrator | 2025-05-19 19:45:29.123508 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-19 19:45:29.123515 | orchestrator | Monday 19 May 2025 19:41:04 +0000 (0:00:04.861) 0:03:23.544 ************ 2025-05-19 19:45:29.123522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:45:29.123540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123554 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.123574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:45:29.123582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123600 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:45:29.123619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:45:29.123626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:45:29.123633 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123640 | orchestrator | 2025-05-19 19:45:29.123646 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-19 19:45:29.123653 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.858) 0:03:24.403 ************ 2025-05-19 19:45:29.123672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123694 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123712 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.123719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123726 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-19 19:45:29.123733 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123740 | orchestrator | 2025-05-19 19:45:29.123746 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-19 19:45:29.123753 | orchestrator | Monday 19 May 2025 19:41:06 +0000 (0:00:01.518) 0:03:25.921 ************ 2025-05-19 19:45:29.123760 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.123766 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.123773 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.123779 | orchestrator | 2025-05-19 19:45:29.123800 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-19 19:45:29.123807 | orchestrator | Monday 19 May 2025 19:41:08 +0000 (0:00:01.389) 0:03:27.311 ************ 2025-05-19 19:45:29.123814 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.123824 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.123830 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.123837 | orchestrator | 2025-05-19 19:45:29.123843 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-19 19:45:29.123850 | orchestrator | Monday 19 May 2025 19:41:10 +0000 (0:00:02.403) 0:03:29.714 ************ 2025-05-19 19:45:29.123857 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.123863 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.123870 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.123884 | orchestrator | 2025-05-19 19:45:29.123891 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-19 19:45:29.123897 | orchestrator | Monday 19 May 2025 19:41:11 +0000 (0:00:00.326) 0:03:30.040 ************ 2025-05-19 19:45:29.123904 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.123910 | orchestrator | 2025-05-19 19:45:29.123938 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-19 19:45:29.123948 | orchestrator | Monday 19 May 2025 19:41:12 +0000 (0:00:01.513) 0:03:31.554 ************ 2025-05-19 19:45:29.123956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:45:29.123979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.123993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:45:29.124000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:45:29.124018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124025 | orchestrator | 2025-05-19 19:45:29.124032 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-19 19:45:29.124044 | orchestrator | Monday 19 May 2025 19:41:17 +0000 (0:00:04.997) 0:03:36.551 ************ 2025-05-19 19:45:29.124057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:45:29.124064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124071 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.124081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:45:29.124089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124096 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.124107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:45:29.124121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124128 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.124134 | orchestrator | 2025-05-19 19:45:29.124141 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-19 19:45:29.124148 | orchestrator | Monday 19 May 2025 19:41:18 +0000 (0:00:01.044) 0:03:37.595 ************ 2025-05-19 19:45:29.124155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124170 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 19:45:29.124176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124190 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.124197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 19:45:29.124213 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.124220 | orchestrator | 2025-05-19 19:45:29.124227 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-19 19:45:29.124233 | orchestrator | Monday 19 May 2025 19:41:20 +0000 (0:00:01.571) 0:03:39.166 ************ 2025-05-19 19:45:29.124240 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.124247 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.124253 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.124260 | orchestrator | 2025-05-19 19:45:29.124266 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-19 19:45:29.124273 | orchestrator | Monday 19 May 2025 19:41:21 +0000 (0:00:01.428) 0:03:40.595 ************ 2025-05-19 19:45:29.124279 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.124286 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.124293 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.124304 | orchestrator | 2025-05-19 19:45:29.124310 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-19 19:45:29.124317 | orchestrator | Monday 19 May 2025 19:41:24 +0000 (0:00:02.374) 0:03:42.970 ************ 2025-05-19 19:45:29.124324 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.124330 | orchestrator | 2025-05-19 19:45:29.124337 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-19 19:45:29.124343 | orchestrator | Monday 19 May 2025 19:41:25 +0000 (0:00:01.219) 0:03:44.189 ************ 2025-05-19 19:45:29.124362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 19:45:29.124371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 19:45:29.124422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 19:45:29.124440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124517 | orchestrator | 2025-05-19 19:45:29.124524 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-19 19:45:29.124530 | orchestrator | Monday 19 May 2025 19:41:29 +0000 (0:00:04.600) 0:03:48.790 ************ 2025-05-19 19:45:29.124551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 19:45:29.124571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124601 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.124608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 19:45:29.124615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124648 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.124655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 19:45:29.124665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.124691 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.124697 | orchestrator | 2025-05-19 19:45:29.124704 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-19 19:45:29.124711 | orchestrator | Monday 19 May 2025 19:41:30 +0000 (0:00:01.044) 0:03:49.834 ************ 2025-05-19 19:45:29.124718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124744 
| orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.124760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124774 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.124781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 19:45:29.124795 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.124802 | orchestrator | 2025-05-19 19:45:29.124808 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-19 19:45:29.124815 | orchestrator | Monday 19 May 2025 19:41:32 +0000 (0:00:01.288) 0:03:51.123 ************ 2025-05-19 19:45:29.124822 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.124828 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.124835 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.124847 | orchestrator | 2025-05-19 19:45:29.124853 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-19 19:45:29.124860 | orchestrator | Monday 19 May 2025 19:41:33 +0000 (0:00:01.475) 0:03:52.599 ************ 2025-05-19 19:45:29.124867 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.124874 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.124880 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.124887 | orchestrator | 2025-05-19 19:45:29.124894 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-19 19:45:29.124900 | orchestrator | Monday 19 May 2025 19:41:35 +0000 (0:00:02.322) 0:03:54.921 ************ 2025-05-19 19:45:29.124907 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.124914 | orchestrator | 2025-05-19 19:45:29.124939 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-19 19:45:29.124946 | orchestrator | Monday 19 May 2025 19:41:37 +0000 (0:00:01.515) 0:03:56.436 ************ 2025-05-19 19:45:29.124953 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:45:29.124960 | orchestrator | 2025-05-19 19:45:29.124975 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-19 19:45:29.124986 | orchestrator | Monday 19 May 2025 19:41:40 +0000 (0:00:03.400) 0:03:59.836 ************ 2025-05-19 19:45:29.124994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125022 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 
inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125053 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125085 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125091 | orchestrator | 2025-05-19 19:45:29.125098 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-19 19:45:29.125104 | orchestrator | Monday 19 May 2025 19:41:43 +0000 (0:00:03.071) 0:04:02.908 ************ 2025-05-19 19:45:29.125115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125149 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125179 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 19:45:29.125205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 19:45:29.125212 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125219 | orchestrator | 2025-05-19 19:45:29.125226 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-19 19:45:29.125232 | orchestrator | Monday 19 May 2025 19:41:47 +0000 (0:00:03.107) 0:04:06.015 ************ 2025-05-19 19:45:29.125239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125256 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125277 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 19:45:29.125311 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125317 | orchestrator | 2025-05-19 19:45:29.125324 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-19 19:45:29.125330 | orchestrator | Monday 19 May 2025 19:41:51 +0000 (0:00:04.015) 0:04:10.031 ************ 2025-05-19 19:45:29.125337 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.125343 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.125350 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.125357 | orchestrator | 2025-05-19 19:45:29.125363 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-19 19:45:29.125370 | orchestrator | Monday 19 May 2025 19:41:53 +0000 (0:00:02.381) 0:04:12.412 ************ 2025-05-19 19:45:29.125376 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125383 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125389 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125396 | orchestrator | 2025-05-19 19:45:29.125402 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-19 19:45:29.125409 | orchestrator | Monday 19 May 2025 19:41:55 +0000 (0:00:01.958) 0:04:14.371 ************ 2025-05-19 19:45:29.125415 | orchestrator | 
skipping: [testbed-node-0] 2025-05-19 19:45:29.125422 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125440 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125447 | orchestrator | 2025-05-19 19:45:29.125453 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-19 19:45:29.125460 | orchestrator | Monday 19 May 2025 19:41:56 +0000 (0:00:00.597) 0:04:14.969 ************ 2025-05-19 19:45:29.125466 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.125473 | orchestrator | 2025-05-19 19:45:29.125479 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-19 19:45:29.125486 | orchestrator | Monday 19 May 2025 19:41:57 +0000 (0:00:01.514) 0:04:16.483 ************ 2025-05-19 19:45:29.125497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 19:45:29.125504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 19:45:29.125530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 19:45:29.125537 | orchestrator | 2025-05-19 19:45:29.125552 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-19 19:45:29.125559 | orchestrator | Monday 19 May 2025 19:41:59 +0000 (0:00:01.955) 0:04:18.438 ************ 
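As an illustration of the haproxy sub-dicts printed in the memcached loop items above (this is not kolla-ansible's actual Jinja templating, just a minimal sketch with made-up helper names render_stanza and render_service), such entries could be filtered on their enabled flag and rendered into HAProxy listen stanzas:

# Minimal sketch, NOT kolla-ansible's real template logic: render HAProxy
# listen stanzas from service definitions shaped like the loop items above.
def render_stanza(name, cfg, members):
    lines = [f"listen {name}",
             f"    mode {cfg.get('mode', 'http')}",
             f"    bind 0.0.0.0:{cfg.get('listen_port', cfg['port'])}"]
    lines += [f"    {opt}" for opt in cfg.get("frontend_tcp_extra", [])]
    lines += [f"    server {host} {host}:{cfg['port']} check" for host in members]
    return "\n".join(lines)

def render_service(svc, members):
    stanzas = []
    for name, cfg in svc.get("haproxy", {}).items():
        if not cfg.get("enabled"):   # e.g. memcached carries enabled: False
            continue
        stanzas.append(render_stanza(name, cfg, members))
    return "\n\n".join(stanzas)

# Values taken from the logged items; the composite dict itself is hypothetical.
services = {
    "memcached": {"haproxy": {"memcached": {"enabled": False, "mode": "tcp", "port": "11211"}}},
    "neutron-server": {"haproxy": {"neutron_server": {"enabled": True, "mode": "http", "port": "9696", "listen_port": "9696"}}},
}
for name, svc in services.items():
    out = render_service(svc, ["testbed-node-0", "testbed-node-1", "testbed-node-2"])
    print(out or f"# {name}: no stanza (all haproxy entries disabled)")

Because the memcached entry carries enabled: False, the sketch emits no stanza for it, while the neutron_server entry produces an http listen block for the three testbed nodes.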
2025-05-19 19:45:29.125566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 19:45:29.125573 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 19:45:29.125587 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 19:45:29.125605 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125611 | orchestrator | 2025-05-19 19:45:29.125622 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-19 19:45:29.125629 | orchestrator | Monday 19 May 2025 19:41:59 +0000 (0:00:00.402) 0:04:18.841 ************ 2025-05-19 19:45:29.125636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 19:45:29.125643 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 19:45:29.125657 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 19:45:29.125670 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125677 | orchestrator | 2025-05-19 19:45:29.125695 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-19 19:45:29.125703 | orchestrator | Monday 19 May 2025 19:42:01 +0000 (0:00:01.174) 0:04:20.015 ************ 2025-05-19 19:45:29.125709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125716 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125722 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125729 | orchestrator | 2025-05-19 19:45:29.125735 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-19 19:45:29.125742 | orchestrator | Monday 19 May 2025 19:42:02 +0000 (0:00:01.002) 0:04:21.018 ************ 2025-05-19 19:45:29.125748 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125755 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125762 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125768 | orchestrator | 2025-05-19 19:45:29.125775 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-19 19:45:29.125781 | orchestrator | Monday 19 May 2025 19:42:03 +0000 (0:00:01.721) 0:04:22.739 ************ 2025-05-19 19:45:29.125788 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.125794 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.125801 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.125807 | orchestrator | 2025-05-19 19:45:29.125814 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-19 19:45:29.125820 | orchestrator | Monday 19 May 2025 19:42:04 +0000 (0:00:00.308) 0:04:23.048 ************ 2025-05-19 19:45:29.125827 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.125833 | orchestrator | 2025-05-19 19:45:29.125840 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-19 19:45:29.125846 | orchestrator | Monday 19 May 2025 19:42:05 +0000 (0:00:01.694) 0:04:24.742 ************ 2025-05-19 19:45:29.125862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:45:29.125878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.125885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.125905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.125913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:45:29.125937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.125946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.125978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.125986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.125997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:45:29.126004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.126067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.126128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
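The healthcheck mappings printed with each service above ('interval', 'retries', 'start_period', 'test', 'timeout') have the shape of Docker healthchecks. Below is a minimal sketch, assuming the numeric values are seconds, of how one such mapping could be translated into standard docker run --health-* flags; this is an illustration only, not the kolla container module itself:

# Minimal sketch, assuming the logged healthcheck values are seconds and map
# onto the standard docker CLI health flags.
import shlex

def healthcheck_flags(hc):
    test = hc["test"]
    # kolla-style healthchecks in this log use the ['CMD-SHELL', '<command>'] form
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={shlex.quote(cmd)}",
        f"--health-interval={hc['interval']}s",
        f"--health-timeout={hc['timeout']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
    ]

# Healthcheck copied from the neutron-dhcp-agent item logged above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port neutron-dhcp-agent 5672"],
      "timeout": "30"}
print(" ".join(healthcheck_flags(hc)))

Running it on the neutron-dhcp-agent healthcheck prints the equivalent --health-cmd/--health-interval/--health-timeout/--health-retries/--health-start-period flags.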
2025-05-19 19:45:29.126136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126156 | orchestrator | 2025-05-19 19:45:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:29.126164 | orchestrator | 2025-05-19 19:45:29 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:29.126171 | orchestrator | 2025-05-19 19:45:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:29.126178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126214 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.126265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:45:29.126306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.126314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2025-05-19 19:45:29.126368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.126465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.126491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126504 | orchestrator | 2025-05-19 19:45:29.126523 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-19 19:45:29.126530 | orchestrator | Monday 19 May 2025 19:42:11 +0000 (0:00:05.881) 0:04:30.624 ************ 2025-05-19 19:45:29.126537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:45:29.126549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:45:29.126590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:45:29.126597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
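The service maps echoed in the items above are what drive these haproxy-config results: in this run only neutron-server (enabled, in the host's groups, and carrying an haproxy section) reports "changed", while every agent with enabled set to False or host_in_groups set to False is skipped. The following is a minimal Python sketch of that selection pattern as observed in this log; the dictionaries are abridged from the output above, and the function name and the exact condition are illustrative approximations, not the kolla-ansible role's actual implementation.

# Illustrative sketch of the skip pattern seen in the haproxy-config tasks above.
# NOT kolla-ansible code: the data is abridged from the log and the selection
# rule approximates the observed "skipping"/"changed" results.
neutron_services = {
    "neutron-server": {
        "enabled": True,
        "host_in_groups": True,
        "haproxy": {"neutron_server": {"enabled": True, "mode": "http", "port": "9696"}},
    },
    "neutron-ovn-metadata-agent": {
        "enabled": True,
        "host_in_groups": False,  # host not in the service's groups -> skipped
        "haproxy": {},
    },
    "neutron-dhcp-agent": {
        "enabled": False,         # service disabled in this deployment -> skipped
        "host_in_groups": True,
        "haproxy": {},
    },
}

def services_with_haproxy_config(services: dict) -> list[str]:
    """Return the service keys that would be rendered rather than skipped."""
    return [
        key
        for key, svc in services.items()
        if svc.get("enabled") is True
        and svc.get("host_in_groups", False)
        and svc.get("haproxy")
    ]

print(services_with_haproxy_config(neutron_services))  # ['neutron-server']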
2025-05-19 19:45:29.126658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:45:29.126669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126767 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.126794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.126850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.126857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.126899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126913 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.126936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 
19:45:29.126948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.126955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.126962 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.126985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:45:29.126993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:45:29.127047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.127073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.127081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.127107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.127127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:45:29.127145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:45:29.127160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:45:29.127171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127184 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.127191 | orchestrator | 2025-05-19 19:45:29.127198 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-19 19:45:29.127205 | orchestrator | Monday 19 May 2025 19:42:13 +0000 (0:00:02.235) 0:04:32.859 
************ 2025-05-19 19:45:29.127211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127225 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.127231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127252 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.127269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 19:45:29.127276 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.127282 | orchestrator | 2025-05-19 19:45:29.127289 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-19 19:45:29.127304 | orchestrator | Monday 19 May 2025 19:42:16 +0000 (0:00:02.154) 0:04:35.014 ************ 2025-05-19 19:45:29.127312 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.127318 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.127325 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.127331 | orchestrator | 2025-05-19 19:45:29.127338 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-19 19:45:29.127344 | orchestrator | Monday 19 May 2025 19:42:17 +0000 (0:00:01.606) 0:04:36.620 ************ 2025-05-19 19:45:29.127351 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.127357 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.127364 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.127370 | orchestrator | 2025-05-19 19:45:29.127377 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-19 19:45:29.127384 | orchestrator | Monday 19 May 2025 19:42:20 +0000 (0:00:02.623) 0:04:39.244 ************ 2025-05-19 19:45:29.127390 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.127396 | orchestrator | 2025-05-19 19:45:29.127403 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-19 19:45:29.127410 | orchestrator | Monday 19 May 2025 19:42:22 +0000 (0:00:01.720) 0:04:40.964 ************ 2025-05-19 19:45:29.127416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127446 | orchestrator | 2025-05-19 19:45:29.127453 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-19 19:45:29.127472 | orchestrator | Monday 19 May 2025 19:42:26 +0000 (0:00:04.335) 0:04:45.300 ************ 2025-05-19 19:45:29.127479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.127486 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.127502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.127514 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.127526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.127533 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.127540 | orchestrator | 2025-05-19 19:45:29.127546 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-19 19:45:29.127553 | orchestrator | Monday 19 May 2025 19:42:27 +0000 (0:00:00.858) 0:04:46.159 ************ 2025-05-19 19:45:29.127560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127574 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.127581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127594 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.127613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 19:45:29.127627 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.127634 | orchestrator | 2025-05-19 19:45:29.127640 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-19 19:45:29.127647 | orchestrator | Monday 19 May 2025 19:42:28 +0000 (0:00:01.101) 0:04:47.260 ************ 2025-05-19 19:45:29.127654 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.127660 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.127667 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.127673 | orchestrator | 2025-05-19 19:45:29.127680 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-19 19:45:29.127686 | orchestrator | Monday 19 May 2025 19:42:29 +0000 (0:00:01.538) 0:04:48.798 ************ 2025-05-19 19:45:29.127693 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.127699 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.127710 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.127716 | orchestrator | 2025-05-19 19:45:29.127723 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-19 19:45:29.127730 | orchestrator | Monday 19 May 2025 19:42:32 +0000 (0:00:02.559) 0:04:51.358 ************ 2025-05-19 19:45:29.127736 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.127743 | orchestrator | 2025-05-19 19:45:29.127749 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-19 19:45:29.127756 | orchestrator | Monday 19 May 2025 19:42:34 +0000 (0:00:01.798) 0:04:53.156 ************ 2025-05-19 19:45:29.127767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
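For context on the "Copying over nova haproxy config" task above: the role loops over service definitions like the nova-api item echoed in the log, and each service may carry a 'haproxy' mapping whose entries describe internal and external frontends (port, listen_port, external_fqdn, tls_backend). The sketch below is a minimal, hypothetical illustration of that item structure and of how the enabled listeners could be picked out of it; it is not the kolla-ansible role's actual template logic, and the helper enabled_listeners() plus the handling of mixed boolean/'yes'/'no' values are assumptions based only on the values visible in this log.

```python
# Minimal sketch (NOT the kolla-ansible haproxy-config role): models the structure of
# the nova-api 'item' dict echoed in the task output above, abridged to the fields that
# matter for haproxy, and picks out the listeners whose 'enabled' value is truthy.

nova_api = {
    "container_name": "nova_api",
    "group": "nova-api",
    "enabled": True,
    "haproxy": {
        "nova_api": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8774", "listen_port": "8774", "tls_backend": "no",
        },
        "nova_api_external": {
            "enabled": True, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8774", "listen_port": "8774", "tls_backend": "no",
        },
        "nova_metadata": {
            "enabled": True, "mode": "http", "external": False,
            "port": "8775", "listen_port": "8775", "tls_backend": "no",
        },
        "nova_metadata_external": {
            # 'enabled' is the string 'no' here, exactly as shown in the log output.
            "enabled": "no", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "8775", "listen_port": "8775", "tls_backend": "no",
        },
    },
}


def enabled_listeners(service: dict) -> list[tuple[str, str]]:
    """Return (listener_name, listen_port) for haproxy entries that are enabled.

    The log mixes booleans and 'yes'/'no' strings for 'enabled', so both are handled.
    """
    result = []
    for name, cfg in service.get("haproxy", {}).items():
        enabled = cfg.get("enabled")
        if enabled is True or str(enabled).lower() in ("yes", "true"):
            result.append((name, cfg["listen_port"]))
    return result


if __name__ == "__main__":
    for name, port in enabled_listeners(nova_api):
        print(f"{name} -> listen_port {port}")
```

Running the sketch lists nova_api, nova_api_external and nova_metadata on ports 8774/8775, while nova_metadata_external drops out because its 'enabled' value is the string 'no'; services without a 'haproxy' key at all (nova-scheduler, nova-super-conductor) would yield no listeners, which lines up with those items appearing only as 'skipping' in the task output above.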
2025-05-19 19:45:29.127826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.127841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127864 | orchestrator | 2025-05-19 19:45:29.127871 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-19 19:45:29.127878 | orchestrator | Monday 19 May 2025 19:42:40 +0000 (0:00:06.228) 0:04:59.385 ************ 2025-05-19 19:45:29.127885 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.127896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.127910 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.127994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.128011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.128018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.128025 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.128042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2025-05-19 19:45:29.128049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.128070 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128077 | orchestrator | 2025-05-19 19:45:29.128083 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-19 19:45:29.128089 | orchestrator | Monday 19 May 2025 19:42:41 +0000 (0:00:00.961) 0:05:00.346 ************ 2025-05-19 19:45:29.128096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128122 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128153 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 19:45:29.128187 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128193 | orchestrator | 2025-05-19 19:45:29.128200 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-19 19:45:29.128206 | orchestrator | Monday 19 May 2025 19:42:42 +0000 (0:00:01.439) 0:05:01.785 ************ 2025-05-19 19:45:29.128212 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.128218 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.128224 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.128230 | orchestrator | 2025-05-19 19:45:29.128236 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-19 19:45:29.128246 | orchestrator | Monday 19 May 2025 19:42:44 +0000 (0:00:01.465) 0:05:03.250 ************ 2025-05-19 19:45:29.128252 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.128258 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.128264 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.128270 | orchestrator | 2025-05-19 19:45:29.128276 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-19 19:45:29.128282 | orchestrator | Monday 19 May 2025 19:42:47 +0000 (0:00:02.708) 0:05:05.959 ************ 2025-05-19 19:45:29.128289 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.128295 | orchestrator | 2025-05-19 19:45:29.128301 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-19 19:45:29.128307 | orchestrator | Monday 19 May 2025 19:42:48 +0000 (0:00:01.475) 0:05:07.435 ************ 2025-05-19 19:45:29.128313 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-19 19:45:29.128320 | orchestrator | 2025-05-19 19:45:29.128336 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-19 19:45:29.128342 | orchestrator | Monday 19 May 2025 19:42:50 +0000 (0:00:01.777) 0:05:09.212 ************ 2025-05-19 19:45:29.128349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 19:45:29.128356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 19:45:29.128362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 19:45:29.128369 | orchestrator | 2025-05-19 19:45:29.128375 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-19 19:45:29.128381 | orchestrator | Monday 19 May 2025 19:42:55 +0000 (0:00:05.360) 0:05:14.573 ************ 2025-05-19 19:45:29.128391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128397 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128415 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128428 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128434 | orchestrator | 2025-05-19 19:45:29.128440 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-19 19:45:29.128446 | orchestrator | Monday 19 May 2025 19:42:57 +0000 (0:00:01.532) 0:05:16.105 ************ 2025-05-19 19:45:29.128452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128477 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128496 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 19:45:29.128518 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128527 | orchestrator | 2025-05-19 19:45:29.128537 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 19:45:29.128545 | orchestrator | Monday 19 May 2025 19:42:59 +0000 (0:00:02.358) 0:05:18.463 ************ 2025-05-19 19:45:29.128554 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.128562 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.128571 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.128582 | orchestrator | 2025-05-19 19:45:29.128593 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 19:45:29.128603 | orchestrator | Monday 19 May 2025 19:43:02 +0000 (0:00:03.001) 0:05:21.465 ************ 2025-05-19 19:45:29.128613 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.128622 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.128630 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.128636 | orchestrator | 2025-05-19 19:45:29.128642 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-19 19:45:29.128654 | orchestrator | Monday 19 May 2025 19:43:06 +0000 (0:00:03.712) 0:05:25.178 ************ 2025-05-19 19:45:29.128660 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-19 19:45:29.128666 | orchestrator | 2025-05-19 19:45:29.128672 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-19 19:45:29.128684 | orchestrator | Monday 19 May 2025 19:43:07 +0000 (0:00:01.360) 0:05:26.538 ************ 2025-05-19 19:45:29.128691 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128697 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128710 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128723 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128729 | orchestrator | 2025-05-19 19:45:29.128735 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-19 19:45:29.128754 | orchestrator | Monday 19 May 2025 19:43:09 +0000 (0:00:02.054) 0:05:28.593 ************ 2025-05-19 19:45:29.128761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128767 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128780 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128787 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 19:45:29.128797 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128803 | orchestrator | 2025-05-19 19:45:29.128810 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-19 19:45:29.128816 | orchestrator | Monday 19 May 2025 19:43:11 +0000 (0:00:02.004) 0:05:30.597 ************ 2025-05-19 19:45:29.128822 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.128828 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.128834 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.128840 | orchestrator | 2025-05-19 19:45:29.128846 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 19:45:29.128853 | orchestrator | Monday 19 May 2025 19:43:13 +0000 (0:00:01.896) 0:05:32.493 ************ 2025-05-19 19:45:29.128862 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.128868 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.128874 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.128880 | orchestrator | 2025-05-19 19:45:29.128887 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 19:45:29.128893 | orchestrator | Monday 19 May 2025 19:43:16 +0000 (0:00:02.985) 0:05:35.479 ************ 2025-05-19 19:45:29.128899 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.128905 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.128911 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.128935 | orchestrator | 2025-05-19 19:45:29.128942 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-19 19:45:29.128948 | orchestrator | Monday 19 May 2025 19:43:20 +0000 (0:00:03.666) 0:05:39.145 ************ 2025-05-19 19:45:29.128954 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-19 19:45:29.128960 | orchestrator | 2025-05-19 19:45:29.128966 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-19 19:45:29.128972 | orchestrator | Monday 19 May 2025 19:43:21 +0000 (0:00:01.403) 0:05:40.549 ************ 2025-05-19 19:45:29.128979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.128985 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.129005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': 
{'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.129012 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.129018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.129028 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.129035 | orchestrator | 2025-05-19 19:45:29.129041 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-19 19:45:29.129047 | orchestrator | Monday 19 May 2025 19:43:23 +0000 (0:00:01.677) 0:05:42.227 ************ 2025-05-19 19:45:29.129053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.129060 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.129066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.129072 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.129082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 19:45:29.129089 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.129095 | orchestrator | 2025-05-19 19:45:29.129101 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] 
**** 2025-05-19 19:45:29.129107 | orchestrator | Monday 19 May 2025 19:43:25 +0000 (0:00:02.120) 0:05:44.348 ************ 2025-05-19 19:45:29.129113 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.129119 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.129125 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.129131 | orchestrator | 2025-05-19 19:45:29.129137 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 19:45:29.129143 | orchestrator | Monday 19 May 2025 19:43:27 +0000 (0:00:02.021) 0:05:46.369 ************ 2025-05-19 19:45:29.129150 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.129156 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.129162 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.129168 | orchestrator | 2025-05-19 19:45:29.129174 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 19:45:29.129180 | orchestrator | Monday 19 May 2025 19:43:30 +0000 (0:00:03.229) 0:05:49.599 ************ 2025-05-19 19:45:29.129186 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.129192 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.129198 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.129204 | orchestrator | 2025-05-19 19:45:29.129210 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-19 19:45:29.129216 | orchestrator | Monday 19 May 2025 19:43:35 +0000 (0:00:04.865) 0:05:54.464 ************ 2025-05-19 19:45:29.129222 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.129233 | orchestrator | 2025-05-19 19:45:29.129239 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-19 19:45:29.129245 | orchestrator | Monday 19 May 2025 19:43:37 +0000 (0:00:01.777) 0:05:56.242 ************ 2025-05-19 19:45:29.129263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.129270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129277 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.129322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 
'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.129413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129465 | orchestrator | 2025-05-19 19:45:29.129471 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-19 19:45:29.129477 | orchestrator | Monday 19 May 2025 19:43:42 +0000 (0:00:05.437) 0:06:01.680 ************ 2025-05-19 19:45:29.129487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.129494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.129536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129542 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.129552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129593 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.129600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.129606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 19:45:29.129616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 19:45:29.129633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:45:29.129639 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.129645 | orchestrator | 2025-05-19 19:45:29.129651 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-19 19:45:29.129658 | orchestrator | Monday 19 May 2025 19:43:43 +0000 (0:00:01.019) 0:06:02.699 ************ 2025-05-19 19:45:29.129664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129687 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.129693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129706 | orchestrator |
skipping: [testbed-node-1] 2025-05-19 19:45:29.129712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 19:45:29.129725 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.129731 | orchestrator | 2025-05-19 19:45:29.129737 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-19 19:45:29.129743 | orchestrator | Monday 19 May 2025 19:43:44 +0000 (0:00:00.939) 0:06:03.638 ************ 2025-05-19 19:45:29.129749 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.129755 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.129762 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.129768 | orchestrator | 2025-05-19 19:45:29.129774 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-19 19:45:29.129780 | orchestrator | Monday 19 May 2025 19:43:45 +0000 (0:00:01.230) 0:06:04.869 ************ 2025-05-19 19:45:29.129786 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.129792 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.129798 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.129804 | orchestrator | 2025-05-19 19:45:29.129810 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-19 19:45:29.129821 | orchestrator | Monday 19 May 2025 19:43:48 +0000 (0:00:02.503) 0:06:07.372 ************ 2025-05-19 19:45:29.129827 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.129833 | orchestrator | 2025-05-19 19:45:29.129839 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-19 19:45:29.129845 | orchestrator | Monday 19 May 2025 19:43:50 +0000 (0:00:01.604) 0:06:08.976 ************ 2025-05-19 19:45:29.129855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:45:29.129862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:45:29.129880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:45:29.129888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:45:29.129902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:45:29.129910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:45:29.129933 | orchestrator | 2025-05-19 19:45:29.129943 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-19 19:45:29.129953 | orchestrator | Monday 19 May 2025 19:43:56 +0000 (0:00:06.880) 0:06:15.856 ************ 2025-05-19 19:45:29.129978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:45:29.129991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:45:29.130010 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
19:45:29.130045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:45:29.130052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:45:29.130059 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.130077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:45:29.130084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:45:29.130100 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.130106 | orchestrator | 2025-05-19 19:45:29.130113 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-19 19:45:29.130119 | orchestrator | Monday 19 May 2025 19:43:58 +0000 (0:00:01.143) 0:06:16.999 ************ 2025-05-19 19:45:29.130125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 19:45:29.130132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130149 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.130155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 19:45:29.130161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130174 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.130180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-19 19:45:29.130186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-19 19:45:29.130199 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.130205 | orchestrator | 2025-05-19 19:45:29.130211 | orchestrator | TASK 
[proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-19 19:45:29.130217 | orchestrator | Monday 19 May 2025 19:43:59 +0000 (0:00:01.587) 0:06:18.587 ************ 2025-05-19 19:45:29.130224 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.130230 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.130246 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.130252 | orchestrator | 2025-05-19 19:45:29.130259 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-19 19:45:29.130265 | orchestrator | Monday 19 May 2025 19:44:00 +0000 (0:00:00.767) 0:06:19.355 ************ 2025-05-19 19:45:29.130271 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.130281 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.130288 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.130294 | orchestrator | 2025-05-19 19:45:29.130300 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-19 19:45:29.130306 | orchestrator | Monday 19 May 2025 19:44:02 +0000 (0:00:01.902) 0:06:21.257 ************ 2025-05-19 19:45:29.130312 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.130318 | orchestrator | 2025-05-19 19:45:29.130324 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-19 19:45:29.130330 | orchestrator | Monday 19 May 2025 19:44:04 +0000 (0:00:01.953) 0:06:23.210 ************ 2025-05-19 19:45:29.130337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:45:29.130348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:45:29.130355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:45:29.130403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:45:29.130484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:45:29.130513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:45:29.130543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130601 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130636 | orchestrator | 2025-05-19 19:45:29.130642 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-19 19:45:29.130648 | orchestrator | Monday 19 May 2025 19:44:08 +0000 (0:00:04.360) 0:06:27.571 ************ 2025-05-19 19:45:29.130655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:45:29.130665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:45:29.130706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130749 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.130759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:45:29.130766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:45:29.130810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130850 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.130856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:45:29.130866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:45:29.130873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:45:29.130908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:45:29.130928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:45:29.130963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:45:29.130972 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.130981 | orchestrator | 2025-05-19 19:45:29.130990 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 
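[Editor's note, illustrative only] Each `item=...` payload that Ansible prints for these haproxy-config loops is a Python-literal dict, so a dump like the prometheus block above can be parsed offline to see which HAProxy listeners a service defines and which are enabled. The helper below is a minimal sketch for inspecting such dumps; the function name, the trimmed sample item, and the yes/True flag handling are assumptions made here, not part of the job or of kolla-ansible.

# Illustrative helper only -- not part of this job and not kolla-ansible code.
# It parses one dumped loop item (a Python-literal dict, as printed above)
# and lists the HAProxy listeners that are enabled for that service.
import ast

def enabled_listeners(item_literal):
    """Parse one dumped loop item and yield (listener, port) for enabled entries."""
    item = ast.literal_eval(item_literal)
    haproxy = item.get("value", {}).get("haproxy", {})
    for name, cfg in haproxy.items():
        # the flag appears either as a bool (True/False) or a 'yes'/'no' string
        if str(cfg.get("enabled")).lower() in ("true", "yes"):
            yield name, cfg.get("port")

# Trimmed sample shaped like the prometheus-server item dumped above:
sample = ("{'key': 'prometheus-server', 'value': {'enabled': True, "
          "'haproxy': {'prometheus_server': {'enabled': True, 'port': '9091'}, "
          "'prometheus_server_external': {'enabled': False, 'port': '9091'}}}}")
print(list(enabled_listeners(sample)))  # -> [('prometheus_server', '9091')]

Run against the prometheus-server item, for example, this reports only the internal listener on port 9091, matching the dumped entry where the external frontend is disabled.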
2025-05-19 19:45:29.130999 | orchestrator | Monday 19 May 2025 19:44:09 +0000 (0:00:01.307) 0:06:28.878 ************ 2025-05-19 19:45:29.131008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131055 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131107 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 19:45:29.131130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 19:45:29.131143 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131149 | orchestrator | 2025-05-19 19:45:29.131156 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-19 19:45:29.131162 | orchestrator | Monday 19 May 2025 19:44:11 +0000 (0:00:01.462) 0:06:30.340 ************ 2025-05-19 19:45:29.131168 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131174 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131180 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131186 | orchestrator | 2025-05-19 19:45:29.131192 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-19 19:45:29.131203 | orchestrator | Monday 19 May 2025 19:44:12 +0000 (0:00:01.137) 0:06:31.478 ************ 2025-05-19 19:45:29.131210 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131216 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131222 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131228 | orchestrator | 2025-05-19 19:45:29.131234 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-19 19:45:29.131240 | orchestrator | Monday 19 May 2025 19:44:14 +0000 (0:00:01.827) 0:06:33.305 ************ 2025-05-19 19:45:29.131246 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.131252 | orchestrator | 2025-05-19 19:45:29.131258 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-19 19:45:29.131265 | orchestrator | Monday 19 May 2025 19:44:15 +0000 (0:00:01.629) 0:06:34.934 ************ 2025-05-19 19:45:29.131274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:45:29.131282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:45:29.131293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 19:45:29.131300 | orchestrator | 2025-05-19 19:45:29.131306 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-19 19:45:29.131317 | orchestrator | Monday 19 May 2025 19:44:19 +0000 (0:00:03.072) 0:06:38.006 ************ 2025-05-19 19:45:29.131323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 19:45:29.131334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 19:45:29.131341 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131347 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 19:45:29.131360 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131366 | orchestrator | 2025-05-19 19:45:29.131372 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-19 19:45:29.131382 | orchestrator | Monday 19 May 2025 19:44:19 +0000 (0:00:00.720) 0:06:38.727 ************ 2025-05-19 19:45:29.131388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 19:45:29.131395 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 19:45:29.131412 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 19:45:29.131424 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131430 | orchestrator | 2025-05-19 19:45:29.131436 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-19 19:45:29.131442 | orchestrator | Monday 19 May 2025 19:44:20 +0000 (0:00:01.225) 0:06:39.952 ************ 2025-05-19 19:45:29.131448 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131454 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131460 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131466 | orchestrator | 2025-05-19 19:45:29.131472 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-19 19:45:29.131478 | orchestrator | Monday 19 May 2025 19:44:21 +0000 (0:00:00.799) 0:06:40.752 ************ 2025-05-19 19:45:29.131485 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131491 | orchestrator | skipping: 
[testbed-node-1] 2025-05-19 19:45:29.131497 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131502 | orchestrator | 2025-05-19 19:45:29.131509 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-19 19:45:29.131515 | orchestrator | Monday 19 May 2025 19:44:23 +0000 (0:00:01.878) 0:06:42.630 ************ 2025-05-19 19:45:29.131521 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:45:29.131527 | orchestrator | 2025-05-19 19:45:29.131533 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-19 19:45:29.131539 | orchestrator | Monday 19 May 2025 19:44:25 +0000 (0:00:02.104) 0:06:44.735 ************ 2025-05-19 19:45:29.131551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 19:45:29.131604 | orchestrator | 2025-05-19 19:45:29.131610 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-19 19:45:29.131616 | orchestrator | Monday 19 May 2025 19:44:33 +0000 (0:00:08.072) 0:06:52.808 ************ 2025-05-19 19:45:29.131626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131645 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131667 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 19:45:29.131694 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131701 | orchestrator | 2025-05-19 19:45:29.131707 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-19 19:45:29.131713 | orchestrator | Monday 19 May 2025 19:44:34 +0000 (0:00:01.044) 0:06:53.853 ************ 2025-05-19 19:45:29.131720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131745 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131780 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 19:45:29.131816 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131822 | orchestrator | 2025-05-19 19:45:29.131829 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-19 19:45:29.131835 | orchestrator | Monday 19 May 2025 19:44:36 +0000 (0:00:01.886) 0:06:55.739 ************ 2025-05-19 19:45:29.131841 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.131850 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.131857 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.131863 | orchestrator | 2025-05-19 19:45:29.131869 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-19 19:45:29.131875 | orchestrator | Monday 19 May 2025 19:44:38 +0000 (0:00:01.593) 0:06:57.332 ************ 2025-05-19 19:45:29.131881 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.131887 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.131893 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.131899 | orchestrator | 2025-05-19 19:45:29.131905 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-19 
19:45:29.131911 | orchestrator | Monday 19 May 2025 19:44:41 +0000 (0:00:02.813) 0:07:00.145 ************ 2025-05-19 19:45:29.131965 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.131974 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.131981 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.131987 | orchestrator | 2025-05-19 19:45:29.131993 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-19 19:45:29.131999 | orchestrator | Monday 19 May 2025 19:44:41 +0000 (0:00:00.320) 0:07:00.466 ************ 2025-05-19 19:45:29.132005 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132011 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132017 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132023 | orchestrator | 2025-05-19 19:45:29.132029 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-19 19:45:29.132036 | orchestrator | Monday 19 May 2025 19:44:42 +0000 (0:00:00.645) 0:07:01.112 ************ 2025-05-19 19:45:29.132042 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132048 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132054 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132060 | orchestrator | 2025-05-19 19:45:29.132066 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-19 19:45:29.132072 | orchestrator | Monday 19 May 2025 19:44:42 +0000 (0:00:00.625) 0:07:01.737 ************ 2025-05-19 19:45:29.132078 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132084 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132090 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132096 | orchestrator | 2025-05-19 19:45:29.132103 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-19 19:45:29.132109 | orchestrator | Monday 19 May 2025 19:44:43 +0000 (0:00:00.305) 0:07:02.042 ************ 2025-05-19 19:45:29.132115 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132128 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132134 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132140 | orchestrator | 2025-05-19 19:45:29.132146 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-19 19:45:29.132152 | orchestrator | Monday 19 May 2025 19:44:43 +0000 (0:00:00.656) 0:07:02.699 ************ 2025-05-19 19:45:29.132158 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132164 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132170 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132176 | orchestrator | 2025-05-19 19:45:29.132182 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-19 19:45:29.132189 | orchestrator | Monday 19 May 2025 19:44:44 +0000 (0:00:01.150) 0:07:03.850 ************ 2025-05-19 19:45:29.132198 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132205 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132211 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132216 | orchestrator | 2025-05-19 19:45:29.132222 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-19 19:45:29.132227 | orchestrator | Monday 19 May 2025 19:44:45 +0000 
(0:00:00.705) 0:07:04.555 ************ 2025-05-19 19:45:29.132233 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132238 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132243 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132248 | orchestrator | 2025-05-19 19:45:29.132254 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-19 19:45:29.132259 | orchestrator | Monday 19 May 2025 19:44:46 +0000 (0:00:00.723) 0:07:05.279 ************ 2025-05-19 19:45:29.132264 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132270 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132275 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132280 | orchestrator | 2025-05-19 19:45:29.132286 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-19 19:45:29.132291 | orchestrator | Monday 19 May 2025 19:44:47 +0000 (0:00:01.409) 0:07:06.689 ************ 2025-05-19 19:45:29.132296 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132302 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132307 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132312 | orchestrator | 2025-05-19 19:45:29.132318 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-19 19:45:29.132323 | orchestrator | Monday 19 May 2025 19:44:49 +0000 (0:00:01.355) 0:07:08.044 ************ 2025-05-19 19:45:29.132328 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132334 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132339 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132344 | orchestrator | 2025-05-19 19:45:29.132350 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-19 19:45:29.132355 | orchestrator | Monday 19 May 2025 19:44:50 +0000 (0:00:00.967) 0:07:09.012 ************ 2025-05-19 19:45:29.132360 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.132366 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.132371 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.132376 | orchestrator | 2025-05-19 19:45:29.132382 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-19 19:45:29.132387 | orchestrator | Monday 19 May 2025 19:44:58 +0000 (0:00:08.928) 0:07:17.940 ************ 2025-05-19 19:45:29.132392 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132398 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132403 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132408 | orchestrator | 2025-05-19 19:45:29.132414 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-19 19:45:29.132419 | orchestrator | Monday 19 May 2025 19:45:00 +0000 (0:00:01.222) 0:07:19.163 ************ 2025-05-19 19:45:29.132424 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.132430 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.132435 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.132444 | orchestrator | 2025-05-19 19:45:29.132454 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-19 19:45:29.132459 | orchestrator | Monday 19 May 2025 19:45:11 +0000 (0:00:11.751) 0:07:30.915 ************ 2025-05-19 19:45:29.132465 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132470 | orchestrator | ok: 
[testbed-node-1] 2025-05-19 19:45:29.132475 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132481 | orchestrator | 2025-05-19 19:45:29.132486 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-19 19:45:29.132491 | orchestrator | Monday 19 May 2025 19:45:12 +0000 (0:00:00.754) 0:07:31.669 ************ 2025-05-19 19:45:29.132497 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:45:29.132502 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:45:29.132507 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:45:29.132513 | orchestrator | 2025-05-19 19:45:29.132518 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-19 19:45:29.132523 | orchestrator | Monday 19 May 2025 19:45:17 +0000 (0:00:04.612) 0:07:36.282 ************ 2025-05-19 19:45:29.132529 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132534 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132539 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132545 | orchestrator | 2025-05-19 19:45:29.132550 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-19 19:45:29.132555 | orchestrator | Monday 19 May 2025 19:45:17 +0000 (0:00:00.667) 0:07:36.949 ************ 2025-05-19 19:45:29.132561 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132566 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132572 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132577 | orchestrator | 2025-05-19 19:45:29.132582 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-19 19:45:29.132587 | orchestrator | Monday 19 May 2025 19:45:18 +0000 (0:00:00.344) 0:07:37.293 ************ 2025-05-19 19:45:29.132593 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132598 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132603 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132608 | orchestrator | 2025-05-19 19:45:29.132614 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-19 19:45:29.132619 | orchestrator | Monday 19 May 2025 19:45:19 +0000 (0:00:00.731) 0:07:38.025 ************ 2025-05-19 19:45:29.132625 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132630 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132635 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132640 | orchestrator | 2025-05-19 19:45:29.132646 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-19 19:45:29.132651 | orchestrator | Monday 19 May 2025 19:45:19 +0000 (0:00:00.741) 0:07:38.767 ************ 2025-05-19 19:45:29.132656 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132662 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132667 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:45:29.132672 | orchestrator | 2025-05-19 19:45:29.132678 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-19 19:45:29.132683 | orchestrator | Monday 19 May 2025 19:45:20 +0000 (0:00:00.703) 0:07:39.470 ************ 2025-05-19 19:45:29.132689 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:45:29.132694 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:45:29.132702 | orchestrator | skipping: 
[testbed-node-2] 2025-05-19 19:45:29.132707 | orchestrator | 2025-05-19 19:45:29.132713 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-19 19:45:29.132718 | orchestrator | Monday 19 May 2025 19:45:20 +0000 (0:00:00.368) 0:07:39.839 ************ 2025-05-19 19:45:29.132724 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132729 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132734 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132740 | orchestrator | 2025-05-19 19:45:29.132745 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-19 19:45:29.132754 | orchestrator | Monday 19 May 2025 19:45:26 +0000 (0:00:05.212) 0:07:45.051 ************ 2025-05-19 19:45:29.132759 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:45:29.132765 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:45:29.132770 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:45:29.132775 | orchestrator | 2025-05-19 19:45:29.132781 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:45:29.132786 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-19 19:45:29.132792 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-19 19:45:29.132797 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0 2025-05-19 19:45:29.132803 | orchestrator | 2025-05-19 19:45:29.132808 | orchestrator | 2025-05-19 19:45:29.132813 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:45:29.132819 | orchestrator | Monday 19 May 2025 19:45:27 +0000 (0:00:01.223) 0:07:46.274 ************ 2025-05-19 19:45:29.132824 | orchestrator | =============================================================================== 2025-05-19 19:45:29.132829 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.75s 2025-05-19 19:45:29.132835 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.93s 2025-05-19 19:45:29.132840 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.07s 2025-05-19 19:45:29.132846 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.80s 2025-05-19 19:45:29.132851 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.88s 2025-05-19 19:45:29.132856 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.47s 2025-05-19 19:45:29.132862 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 6.38s 2025-05-19 19:45:29.132870 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.23s 2025-05-19 19:45:29.132875 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.88s 2025-05-19 19:45:29.132881 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 5.44s 2025-05-19 19:45:29.132886 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.36s 2025-05-19 19:45:29.132891 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.21s 2025-05-19 19:45:29.132897 | orchestrator | 
haproxy-config : Copying over glance haproxy config --------------------- 5.11s 2025-05-19 19:45:29.132902 | orchestrator | loadbalancer : Ensuring proxysql service config subdirectories exist ---- 5.09s 2025-05-19 19:45:29.132907 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.00s 2025-05-19 19:45:29.132913 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.99s 2025-05-19 19:45:29.132933 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.98s 2025-05-19 19:45:29.132939 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.97s 2025-05-19 19:45:29.132944 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 4.87s 2025-05-19 19:45:29.132950 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.86s 2025-05-19 19:45:32.161847 | orchestrator | 2025-05-19 19:45:32 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:32.162281 | orchestrator | 2025-05-19 19:45:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:32.163292 | orchestrator | 2025-05-19 19:45:32 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:32.165382 | orchestrator | 2025-05-19 19:45:32 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:32.165418 | orchestrator | 2025-05-19 19:45:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:35.222271 | orchestrator | 2025-05-19 19:45:35 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:35.222393 | orchestrator | 2025-05-19 19:45:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:35.222406 | orchestrator | 2025-05-19 19:45:35 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:35.222825 | orchestrator | 2025-05-19 19:45:35 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:35.222871 | orchestrator | 2025-05-19 19:45:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:38.261477 | orchestrator | 2025-05-19 19:45:38 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:38.261853 | orchestrator | 2025-05-19 19:45:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:38.262289 | orchestrator | 2025-05-19 19:45:38 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:38.263081 | orchestrator | 2025-05-19 19:45:38 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:38.263118 | orchestrator | 2025-05-19 19:45:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:41.307650 | orchestrator | 2025-05-19 19:45:41 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:41.308169 | orchestrator | 2025-05-19 19:45:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:41.308792 | orchestrator | 2025-05-19 19:45:41 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:41.309633 | orchestrator | 2025-05-19 19:45:41 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:41.309667 | orchestrator | 2025-05-19 19:45:41 | INFO  | Wait 1 second(s) 
until the next check 2025-05-19 19:45:44.358250 | orchestrator | 2025-05-19 19:45:44 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:44.359001 | orchestrator | 2025-05-19 19:45:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:44.360715 | orchestrator | 2025-05-19 19:45:44 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:44.362845 | orchestrator | 2025-05-19 19:45:44 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:44.363184 | orchestrator | 2025-05-19 19:45:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:47.395401 | orchestrator | 2025-05-19 19:45:47 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:47.397991 | orchestrator | 2025-05-19 19:45:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:47.400739 | orchestrator | 2025-05-19 19:45:47 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:47.402831 | orchestrator | 2025-05-19 19:45:47 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:47.405475 | orchestrator | 2025-05-19 19:45:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:50.447097 | orchestrator | 2025-05-19 19:45:50 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:50.450185 | orchestrator | 2025-05-19 19:45:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:50.452201 | orchestrator | 2025-05-19 19:45:50 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:50.453266 | orchestrator | 2025-05-19 19:45:50 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:50.453316 | orchestrator | 2025-05-19 19:45:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:53.489023 | orchestrator | 2025-05-19 19:45:53 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:53.489276 | orchestrator | 2025-05-19 19:45:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:53.489861 | orchestrator | 2025-05-19 19:45:53 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:53.490520 | orchestrator | 2025-05-19 19:45:53 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:53.490545 | orchestrator | 2025-05-19 19:45:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:56.551098 | orchestrator | 2025-05-19 19:45:56 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:56.551250 | orchestrator | 2025-05-19 19:45:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:45:56.552641 | orchestrator | 2025-05-19 19:45:56 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:56.554668 | orchestrator | 2025-05-19 19:45:56 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:56.554759 | orchestrator | 2025-05-19 19:45:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:45:59.607832 | orchestrator | 2025-05-19 19:45:59 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:45:59.608059 | orchestrator | 2025-05-19 19:45:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 
is in state STARTED 2025-05-19 19:45:59.608743 | orchestrator | 2025-05-19 19:45:59 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:45:59.609187 | orchestrator | 2025-05-19 19:45:59 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:45:59.609211 | orchestrator | 2025-05-19 19:45:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:02.646246 | orchestrator | 2025-05-19 19:46:02 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:02.646657 | orchestrator | 2025-05-19 19:46:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:02.647584 | orchestrator | 2025-05-19 19:46:02 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:02.649049 | orchestrator | 2025-05-19 19:46:02 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:02.649824 | orchestrator | 2025-05-19 19:46:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:05.684006 | orchestrator | 2025-05-19 19:46:05 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:05.684719 | orchestrator | 2025-05-19 19:46:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:05.686175 | orchestrator | 2025-05-19 19:46:05 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:05.686790 | orchestrator | 2025-05-19 19:46:05 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:05.686847 | orchestrator | 2025-05-19 19:46:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:08.738707 | orchestrator | 2025-05-19 19:46:08 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:08.745162 | orchestrator | 2025-05-19 19:46:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:08.747464 | orchestrator | 2025-05-19 19:46:08 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:08.758405 | orchestrator | 2025-05-19 19:46:08 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:08.758496 | orchestrator | 2025-05-19 19:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:11.802853 | orchestrator | 2025-05-19 19:46:11 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:11.803056 | orchestrator | 2025-05-19 19:46:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:11.803073 | orchestrator | 2025-05-19 19:46:11 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:11.804813 | orchestrator | 2025-05-19 19:46:11 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:11.804845 | orchestrator | 2025-05-19 19:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:14.853706 | orchestrator | 2025-05-19 19:46:14 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:14.855670 | orchestrator | 2025-05-19 19:46:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:14.856434 | orchestrator | 2025-05-19 19:46:14 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:14.858346 | orchestrator | 2025-05-19 19:46:14 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is 
in state STARTED 2025-05-19 19:46:14.858402 | orchestrator | 2025-05-19 19:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:17.921156 | orchestrator | 2025-05-19 19:46:17 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:17.922199 | orchestrator | 2025-05-19 19:46:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:17.923851 | orchestrator | 2025-05-19 19:46:17 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:17.925739 | orchestrator | 2025-05-19 19:46:17 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:17.925786 | orchestrator | 2025-05-19 19:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:20.981786 | orchestrator | 2025-05-19 19:46:20 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:20.983766 | orchestrator | 2025-05-19 19:46:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:20.985489 | orchestrator | 2025-05-19 19:46:20 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:20.987105 | orchestrator | 2025-05-19 19:46:20 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:20.987525 | orchestrator | 2025-05-19 19:46:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:24.027923 | orchestrator | 2025-05-19 19:46:24 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:24.028043 | orchestrator | 2025-05-19 19:46:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:24.028385 | orchestrator | 2025-05-19 19:46:24 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:24.029370 | orchestrator | 2025-05-19 19:46:24 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:24.029440 | orchestrator | 2025-05-19 19:46:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:27.090163 | orchestrator | 2025-05-19 19:46:27 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:27.093084 | orchestrator | 2025-05-19 19:46:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:27.097138 | orchestrator | 2025-05-19 19:46:27 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:27.102573 | orchestrator | 2025-05-19 19:46:27 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:27.102640 | orchestrator | 2025-05-19 19:46:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:30.154280 | orchestrator | 2025-05-19 19:46:30 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:30.154640 | orchestrator | 2025-05-19 19:46:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:30.155978 | orchestrator | 2025-05-19 19:46:30 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:30.156996 | orchestrator | 2025-05-19 19:46:30 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:30.157107 | orchestrator | 2025-05-19 19:46:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:33.202397 | orchestrator | 2025-05-19 19:46:33 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 
2025-05-19 19:46:33.203769 | orchestrator | 2025-05-19 19:46:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:33.205146 | orchestrator | 2025-05-19 19:46:33 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:33.206479 | orchestrator | 2025-05-19 19:46:33 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:33.206508 | orchestrator | 2025-05-19 19:46:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:36.274365 | orchestrator | 2025-05-19 19:46:36 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:36.276917 | orchestrator | 2025-05-19 19:46:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:36.277961 | orchestrator | 2025-05-19 19:46:36 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:36.281097 | orchestrator | 2025-05-19 19:46:36 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:36.281222 | orchestrator | 2025-05-19 19:46:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:39.317744 | orchestrator | 2025-05-19 19:46:39 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:39.320652 | orchestrator | 2025-05-19 19:46:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:39.326008 | orchestrator | 2025-05-19 19:46:39 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:39.328395 | orchestrator | 2025-05-19 19:46:39 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:39.328532 | orchestrator | 2025-05-19 19:46:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:42.381890 | orchestrator | 2025-05-19 19:46:42 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:42.382013 | orchestrator | 2025-05-19 19:46:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:42.382106 | orchestrator | 2025-05-19 19:46:42 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:42.382115 | orchestrator | 2025-05-19 19:46:42 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:42.382123 | orchestrator | 2025-05-19 19:46:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:45.448159 | orchestrator | 2025-05-19 19:46:45 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:45.450095 | orchestrator | 2025-05-19 19:46:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:45.452320 | orchestrator | 2025-05-19 19:46:45 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:45.453694 | orchestrator | 2025-05-19 19:46:45 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:45.453792 | orchestrator | 2025-05-19 19:46:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:48.502846 | orchestrator | 2025-05-19 19:46:48 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:48.508428 | orchestrator | 2025-05-19 19:46:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:48.509650 | orchestrator | 2025-05-19 19:46:48 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 
2025-05-19 19:46:48.511586 | orchestrator | 2025-05-19 19:46:48 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:48.511696 | orchestrator | 2025-05-19 19:46:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:51.560458 | orchestrator | 2025-05-19 19:46:51 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:51.561404 | orchestrator | 2025-05-19 19:46:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:51.563300 | orchestrator | 2025-05-19 19:46:51 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:51.564622 | orchestrator | 2025-05-19 19:46:51 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:51.564709 | orchestrator | 2025-05-19 19:46:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:54.617717 | orchestrator | 2025-05-19 19:46:54 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:54.619004 | orchestrator | 2025-05-19 19:46:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:54.620675 | orchestrator | 2025-05-19 19:46:54 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:54.620745 | orchestrator | 2025-05-19 19:46:54 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:54.620768 | orchestrator | 2025-05-19 19:46:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:46:57.675701 | orchestrator | 2025-05-19 19:46:57 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:46:57.675923 | orchestrator | 2025-05-19 19:46:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:46:57.676178 | orchestrator | 2025-05-19 19:46:57 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:46:57.676890 | orchestrator | 2025-05-19 19:46:57 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:46:57.677208 | orchestrator | 2025-05-19 19:46:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:00.707239 | orchestrator | 2025-05-19 19:47:00 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:00.707351 | orchestrator | 2025-05-19 19:47:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:00.707718 | orchestrator | 2025-05-19 19:47:00 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:00.712643 | orchestrator | 2025-05-19 19:47:00 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:00.712809 | orchestrator | 2025-05-19 19:47:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:03.750469 | orchestrator | 2025-05-19 19:47:03 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:03.751865 | orchestrator | 2025-05-19 19:47:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:03.752994 | orchestrator | 2025-05-19 19:47:03 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:03.755070 | orchestrator | 2025-05-19 19:47:03 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:03.755119 | orchestrator | 2025-05-19 19:47:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:06.801698 
| orchestrator | 2025-05-19 19:47:06 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:06.802813 | orchestrator | 2025-05-19 19:47:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:06.804390 | orchestrator | 2025-05-19 19:47:06 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:06.806011 | orchestrator | 2025-05-19 19:47:06 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:06.806231 | orchestrator | 2025-05-19 19:47:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:09.861383 | orchestrator | 2025-05-19 19:47:09 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:09.863990 | orchestrator | 2025-05-19 19:47:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:09.864871 | orchestrator | 2025-05-19 19:47:09 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:09.868338 | orchestrator | 2025-05-19 19:47:09 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:09.868398 | orchestrator | 2025-05-19 19:47:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:12.927426 | orchestrator | 2025-05-19 19:47:12 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:12.933139 | orchestrator | 2025-05-19 19:47:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:12.933924 | orchestrator | 2025-05-19 19:47:12 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:12.935045 | orchestrator | 2025-05-19 19:47:12 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:12.935162 | orchestrator | 2025-05-19 19:47:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:15.993219 | orchestrator | 2025-05-19 19:47:15 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:15.994085 | orchestrator | 2025-05-19 19:47:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:15.997570 | orchestrator | 2025-05-19 19:47:15 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:15.999819 | orchestrator | 2025-05-19 19:47:15 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:15.999980 | orchestrator | 2025-05-19 19:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:19.062684 | orchestrator | 2025-05-19 19:47:19 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:19.062809 | orchestrator | 2025-05-19 19:47:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:19.063576 | orchestrator | 2025-05-19 19:47:19 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:19.064593 | orchestrator | 2025-05-19 19:47:19 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:19.064617 | orchestrator | 2025-05-19 19:47:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:22.113461 | orchestrator | 2025-05-19 19:47:22 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:22.115295 | orchestrator | 2025-05-19 19:47:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:22.116792 | 
orchestrator | 2025-05-19 19:47:22 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:22.118895 | orchestrator | 2025-05-19 19:47:22 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:22.118912 | orchestrator | 2025-05-19 19:47:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:25.172284 | orchestrator | 2025-05-19 19:47:25 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:25.175083 | orchestrator | 2025-05-19 19:47:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:25.177939 | orchestrator | 2025-05-19 19:47:25 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:25.177979 | orchestrator | 2025-05-19 19:47:25 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:25.178012 | orchestrator | 2025-05-19 19:47:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:28.235891 | orchestrator | 2025-05-19 19:47:28 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:28.238007 | orchestrator | 2025-05-19 19:47:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:28.239609 | orchestrator | 2025-05-19 19:47:28 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:28.240954 | orchestrator | 2025-05-19 19:47:28 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:28.240991 | orchestrator | 2025-05-19 19:47:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:31.297349 | orchestrator | 2025-05-19 19:47:31 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:31.299048 | orchestrator | 2025-05-19 19:47:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:31.302180 | orchestrator | 2025-05-19 19:47:31 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:31.303780 | orchestrator | 2025-05-19 19:47:31 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:31.303849 | orchestrator | 2025-05-19 19:47:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:34.359333 | orchestrator | 2025-05-19 19:47:34 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:34.361019 | orchestrator | 2025-05-19 19:47:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:34.362107 | orchestrator | 2025-05-19 19:47:34 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:34.363287 | orchestrator | 2025-05-19 19:47:34 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:34.363540 | orchestrator | 2025-05-19 19:47:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:37.413331 | orchestrator | 2025-05-19 19:47:37 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:37.414902 | orchestrator | 2025-05-19 19:47:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:37.417701 | orchestrator | 2025-05-19 19:47:37 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:37.420819 | orchestrator | 2025-05-19 19:47:37 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:37.421356 | 
orchestrator | 2025-05-19 19:47:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:40.471598 | orchestrator | 2025-05-19 19:47:40 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:40.473409 | orchestrator | 2025-05-19 19:47:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:40.475741 | orchestrator | 2025-05-19 19:47:40 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:40.480415 | orchestrator | 2025-05-19 19:47:40 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:40.480501 | orchestrator | 2025-05-19 19:47:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:43.533297 | orchestrator | 2025-05-19 19:47:43 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:43.534350 | orchestrator | 2025-05-19 19:47:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:43.536249 | orchestrator | 2025-05-19 19:47:43 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:43.537601 | orchestrator | 2025-05-19 19:47:43 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:43.537735 | orchestrator | 2025-05-19 19:47:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:46.585051 | orchestrator | 2025-05-19 19:47:46 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:46.586410 | orchestrator | 2025-05-19 19:47:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:46.588054 | orchestrator | 2025-05-19 19:47:46 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:46.589919 | orchestrator | 2025-05-19 19:47:46 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:46.589963 | orchestrator | 2025-05-19 19:47:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:49.640220 | orchestrator | 2025-05-19 19:47:49 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:49.642297 | orchestrator | 2025-05-19 19:47:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:49.646356 | orchestrator | 2025-05-19 19:47:49 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:49.648911 | orchestrator | 2025-05-19 19:47:49 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:49.648962 | orchestrator | 2025-05-19 19:47:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:52.700776 | orchestrator | 2025-05-19 19:47:52 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:52.702167 | orchestrator | 2025-05-19 19:47:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:52.703799 | orchestrator | 2025-05-19 19:47:52 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:52.705185 | orchestrator | 2025-05-19 19:47:52 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:52.705229 | orchestrator | 2025-05-19 19:47:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:55.753948 | orchestrator | 2025-05-19 19:47:55 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state STARTED 2025-05-19 19:47:55.755165 | orchestrator | 2025-05-19 
19:47:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:55.755816 | orchestrator | 2025-05-19 19:47:55 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:55.757033 | orchestrator | 2025-05-19 19:47:55 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:55.757083 | orchestrator | 2025-05-19 19:47:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:47:58.817098 | orchestrator | 2025-05-19 19:47:58.817236 | orchestrator | 2025-05-19 19:47:58.817263 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:47:58.817284 | orchestrator | 2025-05-19 19:47:58.817302 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:47:58.817321 | orchestrator | Monday 19 May 2025 19:45:31 +0000 (0:00:00.342) 0:00:00.342 ************ 2025-05-19 19:47:58.817339 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:47:58.817359 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:47:58.817376 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:47:58.817393 | orchestrator | 2025-05-19 19:47:58.817411 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:47:58.817428 | orchestrator | Monday 19 May 2025 19:45:32 +0000 (0:00:00.461) 0:00:00.804 ************ 2025-05-19 19:47:58.817446 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-05-19 19:47:58.817464 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-05-19 19:47:58.817481 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-05-19 19:47:58.817498 | orchestrator | 2025-05-19 19:47:58.817644 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-05-19 19:47:58.817664 | orchestrator | 2025-05-19 19:47:58.817683 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 19:47:58.817701 | orchestrator | Monday 19 May 2025 19:45:32 +0000 (0:00:00.328) 0:00:01.132 ************ 2025-05-19 19:47:58.817721 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:47:58.817741 | orchestrator | 2025-05-19 19:47:58.817760 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-05-19 19:47:58.817779 | orchestrator | Monday 19 May 2025 19:45:33 +0000 (0:00:00.791) 0:00:01.924 ************ 2025-05-19 19:47:58.817797 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:47:58.817816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:47:58.817837 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-19 19:47:58.817855 | orchestrator | 2025-05-19 19:47:58.817874 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-05-19 19:47:58.817893 | orchestrator | Monday 19 May 2025 19:45:34 +0000 (0:00:00.850) 0:00:02.774 ************ 2025-05-19 19:47:58.817938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.817998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.818157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.818190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818225 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818289 | orchestrator | 2025-05-19 19:47:58.818309 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 19:47:58.818328 | orchestrator | Monday 19 May 2025 19:45:35 +0000 (0:00:01.686) 0:00:04.460 ************ 2025-05-19 19:47:58.818347 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:47:58.818367 | orchestrator | 2025-05-19 19:47:58.818387 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-19 19:47:58.818406 | orchestrator | Monday 19 May 2025 19:45:36 +0000 (0:00:00.824) 0:00:05.285 ************ 2025-05-19 19:47:58.818443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.818467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.818497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.818532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.818661 | orchestrator | 2025-05-19 19:47:58.818682 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-19 19:47:58.818701 | orchestrator | Monday 19 May 2025 19:45:39 +0000 (0:00:03.419) 0:00:08.704 ************ 2025-05-19 19:47:58.818728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.818751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.818773 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:47:58.818808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.818949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.818987 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:47:58.819017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.819040 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.819061 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:47:58.819081 | orchestrator | 2025-05-19 19:47:58.819102 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-19 19:47:58.819121 | orchestrator | Monday 19 May 2025 19:45:41 +0000 (0:00:01.381) 0:00:10.086 ************ 2025-05-19 19:47:58.819154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.819176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.819208 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:47:58.819234 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.819256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.819276 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:47:58.819306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 19:47:58.819327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 19:47:58.819359 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:47:58.819378 | orchestrator | 2025-05-19 19:47:58.819397 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-19 19:47:58.819416 | orchestrator | Monday 19 May 2025 19:45:42 +0000 (0:00:01.420) 0:00:11.506 ************ 2025-05-19 19:47:58.819436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.819464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.819486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.819521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.819563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.819626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.819645 | orchestrator | 2025-05-19 19:47:58.819663 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-19 19:47:58.819681 | orchestrator | Monday 19 May 2025 19:45:45 +0000 (0:00:03.001) 0:00:14.507 ************ 2025-05-19 19:47:58.819701 | orchestrator | changed: 
[testbed-node-0] 2025-05-19 19:47:58.819720 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:47:58.819738 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:47:58.819756 | orchestrator | 2025-05-19 19:47:58.819774 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-05-19 19:47:58.819792 | orchestrator | Monday 19 May 2025 19:45:50 +0000 (0:00:04.728) 0:00:19.236 ************ 2025-05-19 19:47:58.819810 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:47:58.819828 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:47:58.819846 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:47:58.819865 | orchestrator | 2025-05-19 19:47:58.819884 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-05-19 19:47:58.819901 | orchestrator | Monday 19 May 2025 19:45:52 +0000 (0:00:01.665) 0:00:20.901 ************ 2025-05-19 19:47:58.819934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.819966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 19:47:58.819986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-05-19 19:47:58.820013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.820045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.820076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 19:47:58.820095 | orchestrator | 2025-05-19 19:47:58.820111 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 19:47:58.820131 | orchestrator 
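
The per-service dictionaries printed by the opensearch tasks above carry a healthcheck block (test, interval, timeout, retries, start_period) that kolla-ansible hands to the container runtime. As a rough sketch only, assuming the bare numbers are interpreted as seconds and that healthcheck_curl is the helper script shipped inside the kolla images (the exact mapping is not taken from the playbook source), the logged opensearch entry corresponds to a compose-style healthcheck like this:

services:
  opensearch:
    image: registry.osism.tech/kolla/release/opensearch:2.18.0.20241206
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"]  # from the logged dict
      interval: 30s      # 'interval': '30'
      timeout: 30s       # 'timeout': '30'
      retries: 3         # 'retries': '3'
      start_period: 5s   # 'start_period': '5'
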
| Monday 19 May 2025 19:45:54 +0000 (0:00:02.352) 0:00:23.254 ************ 2025-05-19 19:47:58.820149 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:47:58.820168 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:47:58.820187 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:47:58.820204 | orchestrator | 2025-05-19 19:47:58.820222 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 19:47:58.820239 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:00.571) 0:00:23.825 ************ 2025-05-19 19:47:58.820256 | orchestrator | 2025-05-19 19:47:58.820273 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 19:47:58.820292 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:00.447) 0:00:24.273 ************ 2025-05-19 19:47:58.820308 | orchestrator | 2025-05-19 19:47:58.820328 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 19:47:58.820348 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:00.068) 0:00:24.342 ************ 2025-05-19 19:47:58.820367 | orchestrator | 2025-05-19 19:47:58.820387 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-19 19:47:58.820415 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:00.062) 0:00:24.404 ************ 2025-05-19 19:47:58.820435 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:47:58.820455 | orchestrator | 2025-05-19 19:47:58.820475 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-19 19:47:58.820495 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:00.275) 0:00:24.680 ************ 2025-05-19 19:47:58.820512 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:47:58.820528 | orchestrator | 2025-05-19 19:47:58.820544 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-19 19:47:58.820561 | orchestrator | Monday 19 May 2025 19:45:57 +0000 (0:00:01.306) 0:00:25.986 ************ 2025-05-19 19:47:58.820605 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:47:58.820623 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:47:58.820638 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:47:58.820654 | orchestrator | 2025-05-19 19:47:58.820669 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-05-19 19:47:58.820685 | orchestrator | Monday 19 May 2025 19:46:36 +0000 (0:00:39.178) 0:01:05.165 ************ 2025-05-19 19:47:58.820715 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:47:58.820733 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:47:58.820750 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:47:58.820766 | orchestrator | 2025-05-19 19:47:58.820782 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 19:47:58.820798 | orchestrator | Monday 19 May 2025 19:47:45 +0000 (0:01:09.084) 0:02:14.249 ************ 2025-05-19 19:47:58.820815 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:47:58.820833 | orchestrator | 2025-05-19 19:47:58.820849 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-19 19:47:58.820865 | orchestrator | Monday 19 May 2025 19:47:46 +0000 (0:00:00.780) 
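
The handlers named "Disable shard allocation" and "Perform a flush" are skipped here because this is a fresh deployment, but on a running cluster they would normally issue REST calls against OpenSearch before the containers are restarted. A minimal sketch with ansible.builtin.uri, assuming the standard _cluster/settings and _flush endpoints rather than reproducing the role's actual task code:

# Sketch only: the kind of calls such handlers typically make before a rolling restart.
- name: Disable shard allocation
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_cluster/settings"
    method: PUT
    body_format: json
    body:
      transient:
        cluster.routing.allocation.enable: "primaries"

- name: Perform a flush
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_flush"
    method: POST
    status_code: [200]
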
0:02:15.030 ************ 2025-05-19 19:47:58.820882 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:47:58.820900 | orchestrator | 2025-05-19 19:47:58.820917 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-19 19:47:58.820933 | orchestrator | Monday 19 May 2025 19:47:49 +0000 (0:00:02.800) 0:02:17.830 ************ 2025-05-19 19:47:58.820952 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:47:58.820976 | orchestrator | 2025-05-19 19:47:58.820996 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-19 19:47:58.821015 | orchestrator | Monday 19 May 2025 19:47:51 +0000 (0:00:02.557) 0:02:20.388 ************ 2025-05-19 19:47:58.821035 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:47:58.821053 | orchestrator | 2025-05-19 19:47:58.821072 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-05-19 19:47:58.821091 | orchestrator | Monday 19 May 2025 19:47:54 +0000 (0:00:03.018) 0:02:23.406 ************ 2025-05-19 19:47:58.821111 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:47:58.821130 | orchestrator | 2025-05-19 19:47:58.821165 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:47:58.821188 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 19:47:58.821210 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:47:58.821229 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 19:47:58.821248 | orchestrator | 2025-05-19 19:47:58.821267 | orchestrator | 2025-05-19 19:47:58.821286 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:47:58.821306 | orchestrator | Monday 19 May 2025 19:47:57 +0000 (0:00:02.925) 0:02:26.331 ************ 2025-05-19 19:47:58.821325 | orchestrator | =============================================================================== 2025-05-19 19:47:58.821344 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 69.08s 2025-05-19 19:47:58.821363 | orchestrator | opensearch : Restart opensearch container ------------------------------ 39.18s 2025-05-19 19:47:58.821382 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.73s 2025-05-19 19:47:58.821400 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.42s 2025-05-19 19:47:58.821420 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.02s 2025-05-19 19:47:58.821438 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.00s 2025-05-19 19:47:58.821457 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.93s 2025-05-19 19:47:58.821477 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.80s 2025-05-19 19:47:58.821497 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.56s 2025-05-19 19:47:58.821516 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.35s 2025-05-19 19:47:58.821534 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.69s 2025-05-19 
19:47:58.821564 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.67s 2025-05-19 19:47:58.821627 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.42s 2025-05-19 19:47:58.821644 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.38s 2025-05-19 19:47:58.821661 | orchestrator | opensearch : Perform a flush -------------------------------------------- 1.31s 2025-05-19 19:47:58.821677 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.85s 2025-05-19 19:47:58.821697 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.82s 2025-05-19 19:47:58.821728 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.79s 2025-05-19 19:47:58.821748 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2025-05-19 19:47:58.821768 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.58s 2025-05-19 19:47:58.821787 | orchestrator | 2025-05-19 19:47:58 | INFO  | Task d97e0f86-9318-4ba6-9e67-fd9f42b2d5ef is in state SUCCESS 2025-05-19 19:47:58.821806 | orchestrator | 2025-05-19 19:47:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:47:58.821826 | orchestrator | 2025-05-19 19:47:58 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:47:58.822058 | orchestrator | 2025-05-19 19:47:58 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:47:58.822088 | orchestrator | 2025-05-19 19:47:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:01.867666 | orchestrator | 2025-05-19 19:48:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:01.868918 | orchestrator | 2025-05-19 19:48:01 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:01.871145 | orchestrator | 2025-05-19 19:48:01 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:01.871190 | orchestrator | 2025-05-19 19:48:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:04.917988 | orchestrator | 2025-05-19 19:48:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:04.919459 | orchestrator | 2025-05-19 19:48:04 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:04.922852 | orchestrator | 2025-05-19 19:48:04 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:04.922906 | orchestrator | 2025-05-19 19:48:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:07.971293 | orchestrator | 2025-05-19 19:48:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:07.972727 | orchestrator | 2025-05-19 19:48:07 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:07.974313 | orchestrator | 2025-05-19 19:48:07 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:07.974360 | orchestrator | 2025-05-19 19:48:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:11.035259 | orchestrator | 2025-05-19 19:48:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:11.037709 | orchestrator | 2025-05-19 19:48:11 | INFO  | Task 
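
The post-config steps above (Check if a log retention policy exists, Create new log retention policy, Apply retention policy to existing indices) map onto the OpenSearch Index State Management API. A hedged sketch of the two write calls follows; the policy id, index pattern and retention age are placeholders, not values taken from the playbook:

# Sketch of the ISM calls behind the retention-policy tasks (placeholders marked).
- name: Create new log retention policy
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_plugins/_ism/policies/retention"
    method: PUT
    body_format: json
    status_code: [200, 201]
    body:
      policy:
        description: "Delete old log indices"   # placeholder
        default_state: "hot"
        states:
          - name: "hot"
            actions: []
            transitions:
              - state_name: "delete"
                conditions:
                  min_index_age: "14d"           # placeholder retention period
          - name: "delete"
            actions:
              - delete: {}
            transitions: []

- name: Apply retention policy to existing indices
  ansible.builtin.uri:
    url: "http://192.168.16.10:9200/_plugins/_ism/add/log-*"   # placeholder index pattern
    method: POST
    body_format: json
    body:
      policy_id: "retention"
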
677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:11.040373 | orchestrator | 2025-05-19 19:48:11 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:11.040434 | orchestrator | 2025-05-19 19:48:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:14.089027 | orchestrator | 2025-05-19 19:48:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:14.090675 | orchestrator | 2025-05-19 19:48:14 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:14.092223 | orchestrator | 2025-05-19 19:48:14 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:14.092994 | orchestrator | 2025-05-19 19:48:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:17.140890 | orchestrator | 2025-05-19 19:48:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:17.141918 | orchestrator | 2025-05-19 19:48:17 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:17.144604 | orchestrator | 2025-05-19 19:48:17 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:17.144660 | orchestrator | 2025-05-19 19:48:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:20.183596 | orchestrator | 2025-05-19 19:48:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:20.183807 | orchestrator | 2025-05-19 19:48:20 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:20.184752 | orchestrator | 2025-05-19 19:48:20 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:20.184768 | orchestrator | 2025-05-19 19:48:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:23.242733 | orchestrator | 2025-05-19 19:48:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:23.242858 | orchestrator | 2025-05-19 19:48:23 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:23.253066 | orchestrator | 2025-05-19 19:48:23 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:23.253168 | orchestrator | 2025-05-19 19:48:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:26.342135 | orchestrator | 2025-05-19 19:48:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:26.342251 | orchestrator | 2025-05-19 19:48:26 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:26.342838 | orchestrator | 2025-05-19 19:48:26 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:26.342939 | orchestrator | 2025-05-19 19:48:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:29.403256 | orchestrator | 2025-05-19 19:48:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:29.403364 | orchestrator | 2025-05-19 19:48:29 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:29.404002 | orchestrator | 2025-05-19 19:48:29 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:29.404040 | orchestrator | 2025-05-19 19:48:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:32.462170 | orchestrator | 2025-05-19 19:48:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state 
STARTED 2025-05-19 19:48:32.462319 | orchestrator | 2025-05-19 19:48:32 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:32.463836 | orchestrator | 2025-05-19 19:48:32 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:32.463903 | orchestrator | 2025-05-19 19:48:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:35.519085 | orchestrator | 2025-05-19 19:48:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:35.520101 | orchestrator | 2025-05-19 19:48:35 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:35.520932 | orchestrator | 2025-05-19 19:48:35 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:35.522226 | orchestrator | 2025-05-19 19:48:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:38.574322 | orchestrator | 2025-05-19 19:48:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:38.576538 | orchestrator | 2025-05-19 19:48:38 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:38.578837 | orchestrator | 2025-05-19 19:48:38 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:38.578891 | orchestrator | 2025-05-19 19:48:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:41.638342 | orchestrator | 2025-05-19 19:48:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:41.638861 | orchestrator | 2025-05-19 19:48:41 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:41.639672 | orchestrator | 2025-05-19 19:48:41 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:41.639697 | orchestrator | 2025-05-19 19:48:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:44.701865 | orchestrator | 2025-05-19 19:48:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:44.702005 | orchestrator | 2025-05-19 19:48:44 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:44.705153 | orchestrator | 2025-05-19 19:48:44 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:44.705275 | orchestrator | 2025-05-19 19:48:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:47.768097 | orchestrator | 2025-05-19 19:48:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:47.768202 | orchestrator | 2025-05-19 19:48:47 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:47.769025 | orchestrator | 2025-05-19 19:48:47 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:47.769128 | orchestrator | 2025-05-19 19:48:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:50.834545 | orchestrator | 2025-05-19 19:48:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:50.835994 | orchestrator | 2025-05-19 19:48:50 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:50.837741 | orchestrator | 2025-05-19 19:48:50 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:50.837770 | orchestrator | 2025-05-19 19:48:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:53.883050 | orchestrator 
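
The repeated "is in state STARTED ... Wait 1 second(s) until the next check" lines are the OSISM task watcher polling three task IDs until they finish. The same wait-until-done pattern can be expressed as an Ansible retry loop; the helper command below is hypothetical and stands in for the real watcher:

# Generic polling pattern matching the log above (placeholder command, not the osism CLI).
- name: Wait until the task reports SUCCESS
  ansible.builtin.command: /usr/local/bin/check-task-state 6cbcb477-08de-4f2b-846d-588e50cbe210   # hypothetical helper
  register: task_state
  until: "'SUCCESS' in task_state.stdout"
  retries: 600
  delay: 1            # matches the 1 second(s) interval in the log
  changed_when: false
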
| 2025-05-19 19:48:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:53.884339 | orchestrator | 2025-05-19 19:48:53 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:53.886819 | orchestrator | 2025-05-19 19:48:53 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:53.887211 | orchestrator | 2025-05-19 19:48:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:56.932992 | orchestrator | 2025-05-19 19:48:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:56.935168 | orchestrator | 2025-05-19 19:48:56 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:56.937273 | orchestrator | 2025-05-19 19:48:56 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:56.937367 | orchestrator | 2025-05-19 19:48:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:48:59.992997 | orchestrator | 2025-05-19 19:48:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:48:59.996140 | orchestrator | 2025-05-19 19:48:59 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:48:59.996197 | orchestrator | 2025-05-19 19:48:59 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state STARTED 2025-05-19 19:48:59.996210 | orchestrator | 2025-05-19 19:48:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:03.058880 | orchestrator | 2025-05-19 19:49:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:03.060280 | orchestrator | 2025-05-19 19:49:03 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:03.063576 | orchestrator | 2025-05-19 19:49:03 | INFO  | Task 28d4e72c-89e6-4fae-83eb-873af121443f is in state SUCCESS 2025-05-19 19:49:03.065101 | orchestrator | 2025-05-19 19:49:03.065123 | orchestrator | 2025-05-19 19:49:03.065131 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-19 19:49:03.065138 | orchestrator | 2025-05-19 19:49:03.065145 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-19 19:49:03.065152 | orchestrator | Monday 19 May 2025 19:45:31 +0000 (0:00:00.171) 0:00:00.171 ************ 2025-05-19 19:49:03.065158 | orchestrator | ok: [localhost] => { 2025-05-19 19:49:03.065167 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-19 19:49:03.065174 | orchestrator | } 2025-05-19 19:49:03.065181 | orchestrator | 2025-05-19 19:49:03.065188 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-19 19:49:03.065194 | orchestrator | Monday 19 May 2025 19:45:31 +0000 (0:00:00.050) 0:00:00.221 ************ 2025-05-19 19:49:03.065202 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-19 19:49:03.065210 | orchestrator | ...ignoring 2025-05-19 19:49:03.065217 | orchestrator | 2025-05-19 19:49:03.065223 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-19 19:49:03.065229 | orchestrator | Monday 19 May 2025 19:45:34 +0000 (0:00:02.552) 0:00:02.774 ************ 2025-05-19 19:49:03.065235 | orchestrator | skipping: [localhost] 2025-05-19 19:49:03.065241 | orchestrator | 2025-05-19 19:49:03.065248 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-19 19:49:03.065254 | orchestrator | Monday 19 May 2025 19:45:34 +0000 (0:00:00.057) 0:00:02.831 ************ 2025-05-19 19:49:03.065260 | orchestrator | ok: [localhost] 2025-05-19 19:49:03.065266 | orchestrator | 2025-05-19 19:49:03.065272 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:49:03.065278 | orchestrator | 2025-05-19 19:49:03.065284 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:49:03.065290 | orchestrator | Monday 19 May 2025 19:45:34 +0000 (0:00:00.286) 0:00:03.118 ************ 2025-05-19 19:49:03.065296 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.065303 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.065309 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.065315 | orchestrator | 2025-05-19 19:49:03.065321 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:49:03.065327 | orchestrator | Monday 19 May 2025 19:45:35 +0000 (0:00:00.702) 0:00:03.820 ************ 2025-05-19 19:49:03.065333 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-19 19:49:03.065340 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-19 19:49:03.065367 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-19 19:49:03.065373 | orchestrator | 2025-05-19 19:49:03.065379 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-19 19:49:03.065385 | orchestrator | 2025-05-19 19:49:03.065423 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-19 19:49:03.065441 | orchestrator | Monday 19 May 2025 19:45:35 +0000 (0:00:00.501) 0:00:04.321 ************ 2025-05-19 19:49:03.065447 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:03.065453 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 19:49:03.065459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 19:49:03.065464 | orchestrator | 2025-05-19 19:49:03.065470 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 19:49:03.065476 | orchestrator | Monday 19 May 2025 19:45:36 +0000 (0:00:00.689) 0:00:05.011 ************ 2025-05-19 19:49:03.065482 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:03.065488 | orchestrator | 2025-05-19 19:49:03.065494 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-19 19:49:03.065500 | orchestrator | Monday 19 May 2025 19:45:37 +0000 (0:00:00.767) 0:00:05.778 ************ 2025-05-19 
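
The "Check MariaDB service" failure above is expected on a first deployment; the play says so itself ("This is fine.") and the "...ignoring" marker shows the error is tolerated. The pattern is an ignored wait_for probe whose outcome decides whether kolla_action_mariadb becomes upgrade or falls back to kolla_action_ng. A sketch of that probe-and-branch logic, with the host and port taken from the log and everything else illustrative rather than the OSISM play itself:

- name: Check MariaDB service
  ansible.builtin.wait_for:
    host: 192.168.16.9
    port: 3306
    search_regex: MariaDB
    timeout: 2              # assumption based on "elapsed": 2 in the log
  register: mariadb_check
  ignore_errors: true

- name: Set kolla_action_mariadb = upgrade if MariaDB is already running
  ansible.builtin.set_fact:
    kolla_action_mariadb: upgrade
  when: mariadb_check is succeeded

- name: Set kolla_action_mariadb = kolla_action_ng
  ansible.builtin.set_fact:
    kolla_action_mariadb: "{{ kolla_action_ng }}"
  when: mariadb_check is failed
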
19:49:03.065519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', 
' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065590 | orchestrator | 2025-05-19 19:49:03.065596 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-19 19:49:03.065605 | orchestrator | Monday 19 May 2025 19:45:41 +0000 (0:00:04.732) 0:00:10.511 ************ 2025-05-19 19:49:03.065611 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.065618 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.065624 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.065629 | orchestrator | 2025-05-19 19:49:03.065635 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-19 19:49:03.065641 | orchestrator | Monday 19 May 2025 19:45:42 +0000 (0:00:00.966) 0:00:11.477 ************ 2025-05-19 19:49:03.065647 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.065653 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.065659 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.065664 | orchestrator | 2025-05-19 19:49:03.065670 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-19 19:49:03.065676 | orchestrator | Monday 19 May 2025 19:45:44 +0000 (0:00:01.746) 0:00:13.224 ************ 2025-05-19 19:49:03.065687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065742 | orchestrator | 2025-05-19 19:49:03.065748 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-19 19:49:03.065754 | orchestrator | Monday 19 May 2025 19:45:51 +0000 (0:00:07.299) 0:00:20.523 ************ 2025-05-19 19:49:03.065763 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.065769 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.065774 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.065780 | orchestrator | 2025-05-19 19:49:03.065786 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-19 19:49:03.065792 | orchestrator | Monday 19 May 2025 19:45:52 +0000 (0:00:01.023) 0:00:21.547 ************ 2025-05-19 19:49:03.065797 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.065803 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:03.065809 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:03.065815 | orchestrator | 2025-05-19 19:49:03.065820 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-19 19:49:03.065826 | orchestrator | Monday 19 May 2025 19:46:01 +0000 (0:00:09.041) 0:00:30.588 
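
"Copying over galera.cnf" is where the wsrep/Galera settings reach the three nodes. A minimal sketch of such a configuration, wrapped in a copy task to stay in playbook form; the destination path, provider path and cluster name are assumptions, and only the node addresses come from the logged inventory:

# Minimal Galera settings sketch (not the kolla-ansible galera.cnf template).
- name: Copying over galera.cnf
  ansible.builtin.copy:
    dest: /etc/kolla/mariadb/galera.cnf        # assumed path
    content: |
      [mysqld]
      binlog_format = ROW
      default_storage_engine = InnoDB
      innodb_autoinc_lock_mode = 2
      wsrep_on = ON
      wsrep_provider = /usr/lib/galera/libgalera_smm.so   # assumed provider path
      wsrep_cluster_name = testbed                        # assumed name
      wsrep_cluster_address = gcomm://192.168.16.10,192.168.16.11,192.168.16.12
      wsrep_sst_method = mariabackup
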
************ 2025-05-19 19:49:03.065837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 19:49:03.065886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-19 19:49:03.065892 | orchestrator | 2025-05-19 19:49:03.065898 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-19 19:49:03.065904 | orchestrator | Monday 19 May 2025 19:46:06 +0000 (0:00:04.780) 0:00:35.369 ************ 2025-05-19 19:49:03.065910 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.065916 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:03.065924 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:03.065930 | orchestrator | 2025-05-19 19:49:03.065936 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-19 19:49:03.065942 | orchestrator | Monday 19 May 2025 19:46:07 +0000 (0:00:01.104) 0:00:36.474 ************ 2025-05-19 19:49:03.065947 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.065953 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.065959 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.065965 | orchestrator | 2025-05-19 19:49:03.065970 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-19 19:49:03.065976 | orchestrator | Monday 19 May 2025 19:46:08 +0000 (0:00:00.450) 0:00:36.925 ************ 2025-05-19 19:49:03.065982 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.065987 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.065993 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.065999 | orchestrator | 2025-05-19 19:49:03.066005 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-19 19:49:03.066010 | orchestrator | Monday 19 May 2025 19:46:08 +0000 (0:00:00.319) 0:00:37.244 ************ 2025-05-19 19:49:03.066056 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-19 19:49:03.066064 | orchestrator | ...ignoring 2025-05-19 19:49:03.066070 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-19 19:49:03.066076 | orchestrator | ...ignoring 2025-05-19 19:49:03.066082 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-19 19:49:03.066092 | orchestrator | ...ignoring 2025-05-19 19:49:03.066098 | orchestrator | 2025-05-19 19:49:03.066104 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-19 19:49:03.066109 | orchestrator | Monday 19 May 2025 19:46:19 +0000 (0:00:11.144) 0:00:48.388 ************ 2025-05-19 19:49:03.066115 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066121 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.066127 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.066132 | orchestrator | 2025-05-19 19:49:03.066138 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-19 19:49:03.066144 | orchestrator | Monday 19 May 2025 19:46:20 +0000 (0:00:00.743) 0:00:49.132 ************ 2025-05-19 19:49:03.066150 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066161 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066167 | orchestrator | 2025-05-19 19:49:03.066173 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-19 19:49:03.066179 | orchestrator | Monday 19 May 2025 19:46:21 +0000 (0:00:00.630) 0:00:49.763 ************ 2025-05-19 19:49:03.066185 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066190 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066196 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066202 | orchestrator | 2025-05-19 19:49:03.066212 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-19 19:49:03.066218 | orchestrator | Monday 19 May 2025 19:46:21 +0000 (0:00:00.442) 0:00:50.205 ************ 2025-05-19 19:49:03.066224 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066230 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066236 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066241 | orchestrator | 2025-05-19 19:49:03.066247 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-19 19:49:03.066253 | orchestrator | Monday 19 May 2025 19:46:22 +0000 (0:00:00.709) 0:00:50.914 ************ 2025-05-19 19:49:03.066259 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066265 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.066271 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.066276 | orchestrator | 2025-05-19 19:49:03.066282 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-19 19:49:03.066288 | orchestrator | Monday 19 May 2025 19:46:22 +0000 (0:00:00.666) 0:00:51.581 ************ 2025-05-19 19:49:03.066294 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066300 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066306 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066311 | orchestrator | 2025-05-19 19:49:03.066317 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 19:49:03.066323 | orchestrator | Monday 19 May 2025 19:46:23 +0000 (0:00:00.697) 0:00:52.278 ************ 2025-05-19 19:49:03.066328 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066334 | orchestrator | skipping: 
[testbed-node-2] 2025-05-19 19:49:03.066340 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-19 19:49:03.066346 | orchestrator | 2025-05-19 19:49:03.066351 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-19 19:49:03.066357 | orchestrator | Monday 19 May 2025 19:46:24 +0000 (0:00:00.583) 0:00:52.861 ************ 2025-05-19 19:49:03.066363 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.066369 | orchestrator | 2025-05-19 19:49:03.066375 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-19 19:49:03.066380 | orchestrator | Monday 19 May 2025 19:46:35 +0000 (0:00:11.176) 0:01:04.038 ************ 2025-05-19 19:49:03.066386 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066411 | orchestrator | 2025-05-19 19:49:03.066417 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 19:49:03.066423 | orchestrator | Monday 19 May 2025 19:46:35 +0000 (0:00:00.150) 0:01:04.188 ************ 2025-05-19 19:49:03.066434 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066440 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066446 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066452 | orchestrator | 2025-05-19 19:49:03.066458 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-19 19:49:03.066464 | orchestrator | Monday 19 May 2025 19:46:36 +0000 (0:00:01.162) 0:01:05.351 ************ 2025-05-19 19:49:03.066470 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.066476 | orchestrator | 2025-05-19 19:49:03.066485 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-19 19:49:03.066492 | orchestrator | Monday 19 May 2025 19:46:47 +0000 (0:00:11.033) 0:01:16.385 ************ 2025-05-19 19:49:03.066497 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066503 | orchestrator | 2025-05-19 19:49:03.066509 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-19 19:49:03.066515 | orchestrator | Monday 19 May 2025 19:46:49 +0000 (0:00:01.672) 0:01:18.058 ************ 2025-05-19 19:49:03.066521 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066526 | orchestrator | 2025-05-19 19:49:03.066532 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-19 19:49:03.066538 | orchestrator | Monday 19 May 2025 19:46:52 +0000 (0:00:02.660) 0:01:20.718 ************ 2025-05-19 19:49:03.066544 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.066550 | orchestrator | 2025-05-19 19:49:03.066555 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-19 19:49:03.066561 | orchestrator | Monday 19 May 2025 19:46:52 +0000 (0:00:00.123) 0:01:20.841 ************ 2025-05-19 19:49:03.066567 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066573 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066579 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.066584 | orchestrator | 2025-05-19 19:49:03.066590 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-19 19:49:03.066596 | orchestrator | Monday 19 May 2025 19:46:52 +0000 (0:00:00.483) 0:01:21.325 ************ 
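The "Timeout when waiting for search string MariaDB in 192.168.16.10:3306" failures earlier, and the "Wait for (first) MariaDB service port liveness" handlers here, are the usual wait_for probe: connect to the Galera port and look for the MariaDB greeting banner. A minimal sketch of that kind of check, assuming ansible.builtin.wait_for and illustrative variable names (not the verbatim kolla-ansible task):

# Sketch only: approximates the liveness probe whose timeout message appears in the log above.
- name: Wait for MariaDB service port liveness (sketch)
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"   # illustrative variable, e.g. 192.168.16.10
    port: 3306
    search_regex: "MariaDB"               # the server greeting banner contains this string
    timeout: 10
  register: check_mariadb_port_liveness
  ignore_errors: true                     # matches the "...ignoring" / ignored=1 seen in the log

The ignored failures on all three nodes simply mean no mariadbd was listening yet, which is expected on a first deployment; the following "Divide hosts by their MariaDB service port liveness" task then branches into the bootstrap path.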
2025-05-19 19:49:03.066602 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.066608 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:03.066613 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:03.066619 | orchestrator | 2025-05-19 19:49:03.066625 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-19 19:49:03.066631 | orchestrator | Monday 19 May 2025 19:46:53 +0000 (0:00:00.539) 0:01:21.865 ************ 2025-05-19 19:49:03.066636 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-19 19:49:03.066642 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.066648 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:03.066654 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:03.066660 | orchestrator | 2025-05-19 19:49:03.066665 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-19 19:49:03.066671 | orchestrator | skipping: no hosts matched 2025-05-19 19:49:03.066677 | orchestrator | 2025-05-19 19:49:03.066683 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 19:49:03.066688 | orchestrator | 2025-05-19 19:49:03.066694 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 19:49:03.066700 | orchestrator | Monday 19 May 2025 19:47:06 +0000 (0:00:13.654) 0:01:35.519 ************ 2025-05-19 19:49:03.066705 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:03.066711 | orchestrator | 2025-05-19 19:49:03.066717 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 19:49:03.066723 | orchestrator | Monday 19 May 2025 19:47:25 +0000 (0:00:18.728) 0:01:54.247 ************ 2025-05-19 19:49:03.066733 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.066739 | orchestrator | 2025-05-19 19:49:03.066745 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 19:49:03.066758 | orchestrator | Monday 19 May 2025 19:47:46 +0000 (0:00:20.576) 0:02:14.824 ************ 2025-05-19 19:49:03.066764 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.066770 | orchestrator | 2025-05-19 19:49:03.066775 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 19:49:03.066781 | orchestrator | 2025-05-19 19:49:03.066787 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 19:49:03.066793 | orchestrator | Monday 19 May 2025 19:47:48 +0000 (0:00:02.669) 0:02:17.493 ************ 2025-05-19 19:49:03.066799 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:03.066805 | orchestrator | 2025-05-19 19:49:03.066810 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 19:49:03.066816 | orchestrator | Monday 19 May 2025 19:48:08 +0000 (0:00:19.683) 0:02:37.176 ************ 2025-05-19 19:49:03.066822 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.066828 | orchestrator | 2025-05-19 19:49:03.066834 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 19:49:03.066839 | orchestrator | Monday 19 May 2025 19:48:24 +0000 (0:00:15.571) 0:02:52.748 ************ 2025-05-19 19:49:03.066845 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.066851 | 
orchestrator | 2025-05-19 19:49:03.066857 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-19 19:49:03.066863 | orchestrator | 2025-05-19 19:49:03.066869 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 19:49:03.066874 | orchestrator | Monday 19 May 2025 19:48:26 +0000 (0:00:02.651) 0:02:55.400 ************ 2025-05-19 19:49:03.066880 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.066886 | orchestrator | 2025-05-19 19:49:03.066892 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 19:49:03.066898 | orchestrator | Monday 19 May 2025 19:48:40 +0000 (0:00:14.092) 0:03:09.492 ************ 2025-05-19 19:49:03.066903 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066909 | orchestrator | 2025-05-19 19:49:03.066915 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 19:49:03.066921 | orchestrator | Monday 19 May 2025 19:48:45 +0000 (0:00:04.592) 0:03:14.086 ************ 2025-05-19 19:49:03.066927 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.066932 | orchestrator | 2025-05-19 19:49:03.066938 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-19 19:49:03.066944 | orchestrator | 2025-05-19 19:49:03.066950 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-19 19:49:03.066955 | orchestrator | Monday 19 May 2025 19:48:48 +0000 (0:00:02.666) 0:03:16.753 ************ 2025-05-19 19:49:03.066961 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:03.066967 | orchestrator | 2025-05-19 19:49:03.066973 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-19 19:49:03.066982 | orchestrator | Monday 19 May 2025 19:48:48 +0000 (0:00:00.864) 0:03:17.617 ************ 2025-05-19 19:49:03.066988 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.066994 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.067000 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.067006 | orchestrator | 2025-05-19 19:49:03.067011 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-19 19:49:03.067017 | orchestrator | Monday 19 May 2025 19:48:51 +0000 (0:00:02.696) 0:03:20.313 ************ 2025-05-19 19:49:03.067023 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.067029 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.067035 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.067041 | orchestrator | 2025-05-19 19:49:03.067047 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-19 19:49:03.067052 | orchestrator | Monday 19 May 2025 19:48:53 +0000 (0:00:02.285) 0:03:22.599 ************ 2025-05-19 19:49:03.067058 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.067064 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.067074 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.067080 | orchestrator | 2025-05-19 19:49:03.067086 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-19 19:49:03.067091 | orchestrator | Monday 19 May 2025 19:48:56 +0000 (0:00:02.481) 0:03:25.080 ************ 
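The post-deploy tasks above create the shard root, monitor and backup accounts on the bootstrap node only; testbed-node-1 and testbed-node-2 are skipped because Galera replicates the grants cluster-wide. A minimal sketch of one such task, assuming the community.mysql.mysql_user module and illustrative variable names (the exact grants and task layout in the role may differ):

# Sketch only: account creation of the kind logged above.
- name: Creating mysql monitor user (sketch)
  community.mysql.mysql_user:
    login_host: "{{ api_interface_address }}"     # illustrative
    login_user: "{{ mariadb_shard_root_user }}"   # illustrative
    login_password: "{{ database_password }}"     # illustrative
    name: monitor
    host: "%"
    password: "{{ mariadb_monitor_password }}"    # illustrative; surfaces as MYSQL_PASSWORD in the clustercheck environment above
    priv: "*.*:USAGE"                             # assumption; the real role may grant more
  run_once: true                                  # matches the log: only testbed-node-0 reports changed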
2025-05-19 19:49:03.067097 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.067103 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.067109 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:03.067115 | orchestrator | 2025-05-19 19:49:03.067121 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-19 19:49:03.067126 | orchestrator | Monday 19 May 2025 19:48:58 +0000 (0:00:02.379) 0:03:27.460 ************ 2025-05-19 19:49:03.067132 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:03.067138 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:03.067144 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:03.067149 | orchestrator | 2025-05-19 19:49:03.067155 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-19 19:49:03.067161 | orchestrator | Monday 19 May 2025 19:49:02 +0000 (0:00:03.254) 0:03:30.715 ************ 2025-05-19 19:49:03.067166 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:03.067172 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:03.067178 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:03.067184 | orchestrator | 2025-05-19 19:49:03.067189 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:49:03.067195 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-19 19:49:03.067201 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-19 19:49:03.067213 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-19 19:49:03.067219 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-19 19:49:03.067225 | orchestrator | 2025-05-19 19:49:03.067231 | orchestrator | 2025-05-19 19:49:03.067237 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:49:03.067242 | orchestrator | Monday 19 May 2025 19:49:02 +0000 (0:00:00.387) 0:03:31.102 ************ 2025-05-19 19:49:03.067248 | orchestrator | =============================================================================== 2025-05-19 19:49:03.067254 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.41s 2025-05-19 19:49:03.067260 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.15s 2025-05-19 19:49:03.067265 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.09s 2025-05-19 19:49:03.067271 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 13.65s 2025-05-19 19:49:03.067277 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.18s 2025-05-19 19:49:03.067283 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.14s 2025-05-19 19:49:03.067288 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 11.03s 2025-05-19 19:49:03.067294 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 9.04s 2025-05-19 19:49:03.067300 | orchestrator | mariadb : Copying over config.json files for services ------------------- 7.30s 2025-05-19 19:49:03.067305 | orchestrator | mariadb : Wait for MariaDB service to sync 
WSREP ------------------------ 5.32s 2025-05-19 19:49:03.067311 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.78s 2025-05-19 19:49:03.067317 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.73s 2025-05-19 19:49:03.067323 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.59s 2025-05-19 19:49:03.067333 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.26s 2025-05-19 19:49:03.067338 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.70s 2025-05-19 19:49:03.067344 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.67s 2025-05-19 19:49:03.067350 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.66s 2025-05-19 19:49:03.067355 | orchestrator | Check MariaDB service --------------------------------------------------- 2.55s 2025-05-19 19:49:03.067361 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.48s 2025-05-19 19:49:03.067370 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.38s 2025-05-19 19:49:03.067376 | orchestrator | 2025-05-19 19:49:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:06.124481 | orchestrator | 2025-05-19 19:49:06 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:06.125207 | orchestrator | 2025-05-19 19:49:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:06.125883 | orchestrator | 2025-05-19 19:49:06 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:06.126881 | orchestrator | 2025-05-19 19:49:06 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:06.126895 | orchestrator | 2025-05-19 19:49:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:09.191441 | orchestrator | 2025-05-19 19:49:09 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:09.192561 | orchestrator | 2025-05-19 19:49:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:09.193530 | orchestrator | 2025-05-19 19:49:09 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:09.194716 | orchestrator | 2025-05-19 19:49:09 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:09.194795 | orchestrator | 2025-05-19 19:49:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:12.251115 | orchestrator | 2025-05-19 19:49:12 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:12.251235 | orchestrator | 2025-05-19 19:49:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:12.251251 | orchestrator | 2025-05-19 19:49:12 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:12.251263 | orchestrator | 2025-05-19 19:49:12 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:12.251274 | orchestrator | 2025-05-19 19:49:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:15.297867 | orchestrator | 2025-05-19 19:49:15 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:15.298310 | orchestrator | 2025-05-19 
19:49:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:15.298989 | orchestrator | 2025-05-19 19:49:15 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:15.299921 | orchestrator | 2025-05-19 19:49:15 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:15.299974 | orchestrator | 2025-05-19 19:49:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:18.347604 | orchestrator | 2025-05-19 19:49:18 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:18.350711 | orchestrator | 2025-05-19 19:49:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:18.351846 | orchestrator | 2025-05-19 19:49:18 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:18.354738 | orchestrator | 2025-05-19 19:49:18 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:18.354818 | orchestrator | 2025-05-19 19:49:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:21.403755 | orchestrator | 2025-05-19 19:49:21 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:21.403979 | orchestrator | 2025-05-19 19:49:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:21.404925 | orchestrator | 2025-05-19 19:49:21 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:21.406266 | orchestrator | 2025-05-19 19:49:21 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:21.406278 | orchestrator | 2025-05-19 19:49:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:24.453837 | orchestrator | 2025-05-19 19:49:24 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:24.455004 | orchestrator | 2025-05-19 19:49:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:24.458259 | orchestrator | 2025-05-19 19:49:24 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:24.459619 | orchestrator | 2025-05-19 19:49:24 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:24.459721 | orchestrator | 2025-05-19 19:49:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:27.502794 | orchestrator | 2025-05-19 19:49:27 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:27.505264 | orchestrator | 2025-05-19 19:49:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:27.505388 | orchestrator | 2025-05-19 19:49:27 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state STARTED 2025-05-19 19:49:27.505408 | orchestrator | 2025-05-19 19:49:27 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:27.505423 | orchestrator | 2025-05-19 19:49:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:30.551726 | orchestrator | 2025-05-19 19:49:30 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:30.553392 | orchestrator | 2025-05-19 19:49:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:30.559443 | orchestrator | 2025-05-19 19:49:30 | INFO  | Task 677fdd63-0fab-44f5-96d8-fc3658f5061b is in state SUCCESS 2025-05-19 19:49:30.561367 | orchestrator | 2025-05-19 
19:49:30.561427 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:49:30.561451 | orchestrator | 2025-05-19 19:49:30.561469 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-19 19:49:30.561490 | orchestrator | 2025-05-19 19:49:30.561507 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-19 19:49:30.561526 | orchestrator | Monday 19 May 2025 19:35:16 +0000 (0:00:01.906) 0:00:01.906 ************ 2025-05-19 19:49:30.561546 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.561568 | orchestrator | 2025-05-19 19:49:30.561769 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-19 19:49:30.561836 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:01.241) 0:00:03.147 ************ 2025-05-19 19:49:30.561858 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.561915 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 19:49:30.561936 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 19:49:30.561955 | orchestrator | 2025-05-19 19:49:30.561973 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-19 19:49:30.561989 | orchestrator | Monday 19 May 2025 19:35:18 +0000 (0:00:00.521) 0:00:03.669 ************ 2025-05-19 19:49:30.562012 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.562254 | orchestrator | 2025-05-19 19:49:30.562302 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-19 19:49:30.562367 | orchestrator | Monday 19 May 2025 19:35:19 +0000 (0:00:01.163) 0:00:04.833 ************ 2025-05-19 19:49:30.562382 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.562480 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.562493 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.562504 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.562515 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.562537 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.562548 | orchestrator | 2025-05-19 19:49:30.562559 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-19 19:49:30.562570 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:01.151) 0:00:05.985 ************ 2025-05-19 19:49:30.562581 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.562669 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.562681 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.562692 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.562703 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.562713 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.562756 | orchestrator | 2025-05-19 19:49:30.562773 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-19 19:49:30.562791 | orchestrator | Monday 19 May 2025 19:35:21 +0000 (0:00:00.666) 0:00:06.652 ************ 2025-05-19 19:49:30.562802 | orchestrator | ok: [testbed-node-0] 2025-05-19 
19:49:30.562813 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.562823 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.562834 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.562845 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.562855 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.562866 | orchestrator | 2025-05-19 19:49:30.562877 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-19 19:49:30.562888 | orchestrator | Monday 19 May 2025 19:35:22 +0000 (0:00:01.287) 0:00:07.939 ************ 2025-05-19 19:49:30.562899 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.562910 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.562920 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.562931 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.562941 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.562953 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.562964 | orchestrator | 2025-05-19 19:49:30.562974 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-19 19:49:30.562986 | orchestrator | Monday 19 May 2025 19:35:24 +0000 (0:00:01.209) 0:00:09.148 ************ 2025-05-19 19:49:30.562997 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.563007 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.563018 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.563029 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.563040 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.563051 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.563061 | orchestrator | 2025-05-19 19:49:30.563072 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-19 19:49:30.563098 | orchestrator | Monday 19 May 2025 19:35:25 +0000 (0:00:01.052) 0:00:10.200 ************ 2025-05-19 19:49:30.563110 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.563121 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.563147 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.563158 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.563169 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.563180 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.563191 | orchestrator | 2025-05-19 19:49:30.563202 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-19 19:49:30.563213 | orchestrator | Monday 19 May 2025 19:35:26 +0000 (0:00:01.188) 0:00:11.388 ************ 2025-05-19 19:49:30.563224 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.563236 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.563247 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.563258 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.563269 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.563279 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.563290 | orchestrator | 2025-05-19 19:49:30.563301 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-19 19:49:30.563312 | orchestrator | Monday 19 May 2025 19:35:27 +0000 (0:00:00.752) 0:00:12.141 ************ 2025-05-19 19:49:30.563347 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.563358 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.563369 | orchestrator | ok: [testbed-node-2] 2025-05-19 
19:49:30.563433 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.563444 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.563455 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.563619 | orchestrator | 2025-05-19 19:49:30.563649 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-19 19:49:30.563661 | orchestrator | Monday 19 May 2025 19:35:28 +0000 (0:00:01.062) 0:00:13.203 ************ 2025-05-19 19:49:30.563672 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.563683 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.563694 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.563705 | orchestrator | 2025-05-19 19:49:30.563715 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-19 19:49:30.563726 | orchestrator | Monday 19 May 2025 19:35:29 +0000 (0:00:00.820) 0:00:14.023 ************ 2025-05-19 19:49:30.563737 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.563747 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.563758 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.563769 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.563779 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.563790 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.563819 | orchestrator | 2025-05-19 19:49:30.563831 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-19 19:49:30.563841 | orchestrator | Monday 19 May 2025 19:35:30 +0000 (0:00:01.895) 0:00:15.919 ************ 2025-05-19 19:49:30.563852 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.563863 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.563874 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.563898 | orchestrator | 2025-05-19 19:49:30.563910 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-19 19:49:30.563921 | orchestrator | Monday 19 May 2025 19:35:34 +0000 (0:00:03.098) 0:00:19.018 ************ 2025-05-19 19:49:30.563931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.563942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.563953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.563993 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564005 | orchestrator | 2025-05-19 19:49:30.564016 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-19 19:49:30.564027 | orchestrator | Monday 19 May 2025 19:35:34 +0000 (0:00:00.576) 0:00:19.595 ************ 2025-05-19 19:49:30.564050 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564065 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564076 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564087 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564098 | orchestrator | 2025-05-19 19:49:30.564109 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-19 19:49:30.564120 | orchestrator | Monday 19 May 2025 19:35:35 +0000 (0:00:00.988) 0:00:20.583 ************ 2025-05-19 19:49:30.564133 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564156 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564167 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564178 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564189 | orchestrator | 2025-05-19 19:49:30.564200 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-19 19:49:30.564221 | orchestrator | Monday 19 May 2025 19:35:35 +0000 (0:00:00.255) 0:00:20.839 ************ 2025-05-19 19:49:30.564235 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 19:35:31.867590', 'end': '2025-05-19 19:35:32.162478', 'delta': '0:00:00.294888', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564251 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 19:35:32.711101', 'end': '2025-05-19 19:35:32.993435', 'delta': '0:00:00.282334', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564271 | orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 19:35:33.634423', 'end': '2025-05-19 19:35:33.903564', 'delta': '0:00:00.269141', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 19:49:30.564283 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564294 | orchestrator | 2025-05-19 19:49:30.564305 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-19 19:49:30.564354 | orchestrator | Monday 19 May 2025 19:35:36 +0000 (0:00:00.201) 0:00:21.040 ************ 2025-05-19 19:49:30.564367 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.564377 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.564388 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.564478 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.564492 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.564504 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.564522 | orchestrator | 2025-05-19 19:49:30.564743 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-19 19:49:30.564756 | orchestrator | Monday 19 May 2025 19:35:38 +0000 (0:00:02.639) 0:00:23.680 ************ 2025-05-19 19:49:30.564767 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.564777 | orchestrator | 2025-05-19 19:49:30.564788 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-19 19:49:30.564799 | orchestrator | Monday 19 May 2025 19:35:39 +0000 (0:00:01.017) 0:00:24.698 ************ 2025-05-19 19:49:30.564810 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564821 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.564831 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.564842 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.564852 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.564863 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.564873 | orchestrator | 2025-05-19 19:49:30.564894 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-19 19:49:30.564905 | orchestrator | Monday 19 May 2025 19:35:41 +0000 (0:00:01.332) 0:00:26.031 ************ 2025-05-19 19:49:30.564916 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.564926 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.564937 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.564947 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.564958 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
19:49:30.564968 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.564979 | orchestrator | 2025-05-19 19:49:30.564990 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:49:30.565000 | orchestrator | Monday 19 May 2025 19:35:43 +0000 (0:00:02.316) 0:00:28.347 ************ 2025-05-19 19:49:30.565023 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565034 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565045 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565055 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565066 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.565076 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565117 | orchestrator | 2025-05-19 19:49:30.565129 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-19 19:49:30.565150 | orchestrator | Monday 19 May 2025 19:35:44 +0000 (0:00:01.265) 0:00:29.613 ************ 2025-05-19 19:49:30.565170 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565181 | orchestrator | 2025-05-19 19:49:30.565192 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-19 19:49:30.565203 | orchestrator | Monday 19 May 2025 19:35:45 +0000 (0:00:00.635) 0:00:30.249 ************ 2025-05-19 19:49:30.565213 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565224 | orchestrator | 2025-05-19 19:49:30.565234 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:49:30.565245 | orchestrator | Monday 19 May 2025 19:35:45 +0000 (0:00:00.340) 0:00:30.589 ************ 2025-05-19 19:49:30.565255 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565266 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565277 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565408 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565425 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.565503 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565514 | orchestrator | 2025-05-19 19:49:30.565525 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-19 19:49:30.565536 | orchestrator | Monday 19 May 2025 19:35:46 +0000 (0:00:00.822) 0:00:31.411 ************ 2025-05-19 19:49:30.565546 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565557 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565568 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565578 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565589 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.565599 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565610 | orchestrator | 2025-05-19 19:49:30.565621 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-19 19:49:30.565632 | orchestrator | Monday 19 May 2025 19:35:47 +0000 (0:00:01.189) 0:00:32.600 ************ 2025-05-19 19:49:30.565642 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565653 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565664 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565674 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565685 | orchestrator | skipping: [testbed-node-4] 
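The ceph-facts fsid block above follows the usual discover-or-generate pattern: query a running cluster for its fsid and, only when none is available and no fsid is already defined, generate a fresh UUID once and publish it as a fact. A minimal sketch of that flow with illustrative task bodies and group names (not the verbatim ceph-facts tasks, which carry additional conditions):

# Sketch only: fsid discovery/generation of the kind logged above.
- name: get current fsid if cluster is already running (sketch)
  ansible.builtin.command: "{{ ceph_cmd }} --cluster {{ cluster }} fsid"
  register: current_fsid
  changed_when: false
  failed_when: false
  run_once: true
  delegate_to: "{{ groups['mons'][0] }}"   # illustrative group name

- name: generate cluster fsid (sketch)
  ansible.builtin.command: python3 -c 'import uuid; print(uuid.uuid4())'
  register: generated_fsid
  run_once: true
  when: current_fsid.rc != 0

- name: set_fact fsid from current_fsid (sketch)
  ansible.builtin.set_fact:
    fsid: "{{ current_fsid.stdout | trim }}"
  when: current_fsid.rc == 0

- name: set_fact fsid (sketch)
  ansible.builtin.set_fact:
    fsid: "{{ generated_fsid.stdout | trim }}"
  when: current_fsid.rc != 0

In this run all of the generate/set_fact branches are skipped, consistent with the fsid already being defined for the deployment, so nothing needs to be derived here.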
2025-05-19 19:49:30.565695 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565706 | orchestrator | 2025-05-19 19:49:30.565717 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-19 19:49:30.565728 | orchestrator | Monday 19 May 2025 19:35:48 +0000 (0:00:00.704) 0:00:33.305 ************ 2025-05-19 19:49:30.565738 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565749 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565759 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565770 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565793 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.565803 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565814 | orchestrator | 2025-05-19 19:49:30.565825 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-19 19:49:30.565836 | orchestrator | Monday 19 May 2025 19:35:49 +0000 (0:00:00.874) 0:00:34.180 ************ 2025-05-19 19:49:30.565847 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.565857 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.565868 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.565900 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.565913 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.565968 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.565980 | orchestrator | 2025-05-19 19:49:30.565992 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-19 19:49:30.566003 | orchestrator | Monday 19 May 2025 19:35:50 +0000 (0:00:00.822) 0:00:35.002 ************ 2025-05-19 19:49:30.566013 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.566063 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.566083 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.566094 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.566105 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.566115 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.566126 | orchestrator | 2025-05-19 19:49:30.566167 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-19 19:49:30.566179 | orchestrator | Monday 19 May 2025 19:35:51 +0000 (0:00:01.123) 0:00:36.126 ************ 2025-05-19 19:49:30.566190 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.566201 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.566212 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.566222 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.566233 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.566244 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.566254 | orchestrator | 2025-05-19 19:49:30.566265 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 19:49:30.566276 | orchestrator | Monday 19 May 2025 19:35:52 +0000 (0:00:00.859) 0:00:36.985 ************ 2025-05-19 19:49:30.566423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.566931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.566951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.566965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b', 'scsi-SQEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5f9cc4b-8b29-4481-945d-7cb76299c28b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.566990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567137 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2', 'scsi-SQEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part14', 'scsi-SQEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part15', 'scsi-SQEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part16', 'scsi-SQEMU_QEMU_HARDDISK_4d43deee-0972-4c02-80df-437cbd2714e2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567412 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.567432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6eb1ee5c--85e6--559d--849b--4772bddae6d6-osd--block--6eb1ee5c--85e6--559d--849b--4772bddae6d6', 'dm-uuid-LVM-9G3JKyLpm2eulDt8H5JQ8xao4AIZs96Pqgn8fmPhBPH2BnEBaDLaLEHZ1LYWdS5n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--702b6aa6--b3de--5669--bdb1--4e94528c6268-osd--block--702b6aa6--b3de--5669--bdb1--4e94528c6268', 
'dm-uuid-LVM-EcY4Icp9gMytsTnktMDhXTZHo5reP7AzSvssmKtQTAOVlNL0xjz3tqc7e35Z2eDI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567505 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.567528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e-osd--block--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e', 'dm-uuid-LVM-9ydvbIPK60Ubi030XTiKL0ZpcekX4L92BOloCJEVVe3W1zlEjNY4qiOclzcnNn3I'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part1', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part14', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part15', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part16', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567801 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--5fdf60fa--c839--55c0--9693--b393079e2a5b-osd--block--5fdf60fa--c839--55c0--9693--b393079e2a5b', 'dm-uuid-LVM-2s5sYhJPDuTuBARqueuiVIR5q5p11BvL8L7pyJcFEzFQZ45bpvYD50eX9rJ3h4d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6eb1ee5c--85e6--559d--849b--4772bddae6d6-osd--block--6eb1ee5c--85e6--559d--849b--4772bddae6d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g9Jdtg-WbpU-nvHv-zqRJ-MtkK-jX13-As6jCf', 'scsi-0QEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199', 'scsi-SQEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--702b6aa6--b3de--5669--bdb1--4e94528c6268-osd--block--702b6aa6--b3de--5669--bdb1--4e94528c6268'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z15ey2-n9T6-dJCv-uCj8-53Ow-Gryt-OnjaxQ', 'scsi-0QEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2', 'scsi-SQEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.567892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.567978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53', 'scsi-SQEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-50-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e-osd--block--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GjEcbQ-8XTh-OJxP-eit7-pJLB-al7v-dp9LyB', 'scsi-0QEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3', 'scsi-SQEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568156 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.568166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5fdf60fa--c839--55c0--9693--b393079e2a5b-osd--block--5fdf60fa--c839--55c0--9693--b393079e2a5b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klAHA5-Nodk-mQ8E-7ylE-uJlv-uJ8W-sIEmp5', 'scsi-0QEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3', 'scsi-SQEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568176 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.568186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d', 'scsi-SQEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568211 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.568227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f4656c6e--aa1c--5ab7--9900--7160e6354d4d-osd--block--f4656c6e--aa1c--5ab7--9900--7160e6354d4d', 'dm-uuid-LVM-ZcvUkVNDJQ8ioS2hHP5OAdcnSwBf0wOOQbTLSttY0OOy3lysEgjBR6Ap5RTZT3jN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568244 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5646b4ad--081a--5fe7--ab17--c0ecc5756623-osd--block--5646b4ad--081a--5fe7--ab17--c0ecc5756623', 'dm-uuid-LVM-I8PlVyy6GMjqNKUv5gU6nLfO3zQU2H4gcMOpber6zQpK3AHN5ZVXQcDe15mSTIUk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:49:30.568425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f4656c6e--aa1c--5ab7--9900--7160e6354d4d-osd--block--f4656c6e--aa1c--5ab7--9900--7160e6354d4d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hkxRGc-a5Nm-QuIu-VeIy-flWs-37nk-FOX5q3', 'scsi-0QEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091', 'scsi-SQEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5646b4ad--081a--5fe7--ab17--c0ecc5756623-osd--block--5646b4ad--081a--5fe7--ab17--c0ecc5756623'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qScHiP-CxQ1-lQnO-87g3-L2C7-u0fI-W981ja', 'scsi-0QEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20', 'scsi-SQEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be', 'scsi-SQEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:49:30.568497 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.568507 | orchestrator | 2025-05-19 19:49:30.568517 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-19 19:49:30.568528 | orchestrator | Monday 19 May 2025 19:35:54 +0000 (0:00:02.157) 0:00:39.143 ************ 2025-05-19 19:49:30.568538 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.568547 | orchestrator | 2025-05-19 19:49:30.568557 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-19 19:49:30.568607 | orchestrator | Monday 19 May 2025 19:35:54 +0000 (0:00:00.379) 0:00:39.522 ************ 2025-05-19 19:49:30.568617 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.568627 | orchestrator | 2025-05-19 19:49:30.568636 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-05-19 19:49:30.568646 | orchestrator | Monday 19 May 2025 19:35:54 +0000 (0:00:00.168) 0:00:39.691 ************ 2025-05-19 19:49:30.568655 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.568665 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.568674 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.568684 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.568693 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.568703 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.568744 | orchestrator | 2025-05-19 19:49:30.568754 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-19 19:49:30.568764 | orchestrator | Monday 19 May 2025 19:35:55 +0000 (0:00:01.034) 0:00:40.725 ************ 2025-05-19 19:49:30.568774 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.568783 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.568793 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.568803 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.568812 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.568822 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.568832 | orchestrator | 2025-05-19 19:49:30.568841 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-19 19:49:30.568851 | orchestrator | Monday 19 May 2025 19:35:58 +0000 (0:00:02.321) 0:00:43.047 ************ 2025-05-19 19:49:30.568860 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.568870 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.568879 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.568889 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.568898 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.568907 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.568916 | orchestrator | 2025-05-19 19:49:30.568925 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:49:30.568933 | orchestrator | Monday 19 May 2025 19:35:58 +0000 (0:00:00.662) 0:00:43.709 ************ 2025-05-19 19:49:30.568940 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.568948 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.568962 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.568970 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.568978 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.568985 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.568993 | orchestrator | 2025-05-19 19:49:30.569001 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:49:30.569009 | orchestrator | Monday 19 May 2025 19:35:59 +0000 (0:00:00.777) 0:00:44.487 ************ 2025-05-19 19:49:30.569017 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.569024 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.569032 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.569040 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.569047 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.569055 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.569063 | orchestrator | 2025-05-19 19:49:30.569071 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:49:30.569083 
| orchestrator | Monday 19 May 2025 19:36:00 +0000 (0:00:00.627) 0:00:45.114 ************ 2025-05-19 19:49:30.569091 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.569098 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.569106 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.569114 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.569121 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.569129 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.569137 | orchestrator | 2025-05-19 19:49:30.569145 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:49:30.569153 | orchestrator | Monday 19 May 2025 19:36:01 +0000 (0:00:01.024) 0:00:46.138 ************ 2025-05-19 19:49:30.569160 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.569168 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.569176 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.569183 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.569191 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.569199 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.569206 | orchestrator | 2025-05-19 19:49:30.569214 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-19 19:49:30.569222 | orchestrator | Monday 19 May 2025 19:36:01 +0000 (0:00:00.656) 0:00:46.794 ************ 2025-05-19 19:49:30.569230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.569243 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.569251 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 19:49:30.569259 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 19:49:30.569267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.569275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 19:49:30.569282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 19:49:30.569290 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 19:49:30.569298 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.569305 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:49:30.569313 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 19:49:30.569342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:49:30.569350 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.569357 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.569365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:49:30.569373 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:49:30.569380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:49:30.569388 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:49:30.569402 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.569410 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:49:30.569418 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.569425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:49:30.569433 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:49:30.569441 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.569449 | orchestrator | 2025-05-19 19:49:30.569456 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-19 19:49:30.569513 | orchestrator | Monday 19 May 2025 19:36:04 +0000 (0:00:02.644) 0:00:49.439 ************ 2025-05-19 19:49:30.569522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.569530 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 19:49:30.569538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.569546 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 19:49:30.569554 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 19:49:30.569561 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.569569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 19:49:30.569577 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 19:49:30.569585 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:49:30.569592 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.569600 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.569608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:49:30.569616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:49:30.569623 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:49:30.569652 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 19:49:30.569661 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.569669 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:49:30.569677 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.569685 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:49:30.569692 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:49:30.569700 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.569708 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:49:30.569715 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:49:30.569723 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.569731 | orchestrator | 2025-05-19 19:49:30.569739 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-19 19:49:30.569747 | orchestrator | Monday 19 May 2025 19:36:07 +0000 (0:00:02.642) 0:00:52.082 ************ 2025-05-19 19:49:30.569755 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-19 19:49:30.569763 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.569775 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-19 19:49:30.569783 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-19 19:49:30.569791 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-19 19:49:30.569799 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 19:49:30.569807 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-19 19:49:30.569814 | 
orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-19 19:49:30.569822 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-19 19:49:30.569830 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 19:49:30.569838 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-19 19:49:30.569846 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-19 19:49:30.569859 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-19 19:49:30.569867 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-19 19:49:30.569875 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-19 19:49:30.569882 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-19 19:49:30.569890 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-19 19:49:30.569898 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-19 19:49:30.569906 | orchestrator | 2025-05-19 19:49:30.569914 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-19 19:49:30.571238 | orchestrator | Monday 19 May 2025 19:36:11 +0000 (0:00:04.670) 0:00:56.752 ************ 2025-05-19 19:49:30.571273 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.571282 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.571290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.571298 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.571306 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 19:49:30.571314 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 19:49:30.571495 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 19:49:30.571504 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 19:49:30.571511 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 19:49:30.571519 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.571527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 19:49:30.571534 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.571542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:49:30.571550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:49:30.571557 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:49:30.571563 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:49:30.571570 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:49:30.571576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:49:30.571583 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.571589 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.571596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:49:30.571602 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:49:30.571609 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:49:30.571615 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.571622 | orchestrator | 2025-05-19 19:49:30.571628 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses 
to monitor_interface - ipv6] **** 2025-05-19 19:49:30.571635 | orchestrator | Monday 19 May 2025 19:36:13 +0000 (0:00:01.324) 0:00:58.077 ************ 2025-05-19 19:49:30.571642 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.571649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.571655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.571661 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.571668 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 19:49:30.571674 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 19:49:30.571681 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 19:49:30.571687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 19:49:30.571694 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 19:49:30.571700 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.571706 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 19:49:30.571713 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:49:30.571731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:49:30.571738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:49:30.571744 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.571751 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:49:30.571757 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.571764 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:49:30.571770 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:49:30.571777 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.571783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:49:30.571789 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:49:30.571796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:49:30.571802 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.571810 | orchestrator | 2025-05-19 19:49:30.571817 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-19 19:49:30.571833 | orchestrator | Monday 19 May 2025 19:36:14 +0000 (0:00:01.287) 0:00:59.365 ************ 2025-05-19 19:49:30.571841 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-19 19:49:30.571849 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:49:30.571858 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:49:30.571865 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:49:30.571872 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-19 19:49:30.571880 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:49:30.571887 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 
19:49:30.571895 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:49:30.571902 | orchestrator | ok: [testbed-node-2] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-19 19:49:30.571921 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:49:30.571929 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:49:30.571936 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:49:30.571943 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:49:30.571951 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:49:30.571958 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:49:30.571965 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.571973 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.571980 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:49:30.571988 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:49:30.571995 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:49:30.572002 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572010 | orchestrator | 2025-05-19 19:49:30.572017 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-19 19:49:30.572025 | orchestrator | Monday 19 May 2025 19:36:15 +0000 (0:00:01.126) 0:01:00.491 ************ 2025-05-19 19:49:30.572032 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.572044 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.572051 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.572059 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.572067 | orchestrator | 2025-05-19 19:49:30.572074 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.572083 | orchestrator | Monday 19 May 2025 19:36:16 +0000 (0:00:01.094) 0:01:01.585 ************ 2025-05-19 19:49:30.572090 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572097 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572104 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572111 | orchestrator | 2025-05-19 19:49:30.572118 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.572126 | orchestrator | Monday 19 May 2025 19:36:17 +0000 (0:00:00.612) 0:01:02.197 ************ 2025-05-19 19:49:30.572133 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572141 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572148 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572155 | orchestrator | 2025-05-19 19:49:30.572162 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 
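
The pattern visible in the tasks above — several mutually exclusive set_fact tasks per address source, of which at most one matches on a given host — is the usual way ceph-ansible-style roles resolve an address from either a CIDR block, an explicit address, or an interface name. The following is only a rough, hypothetical sketch of that fall-through for the radosgw case; the variable names radosgw_address_block, radosgw_address and radosgw_interface are taken from the task titles in the log, but the sentinel defaults ('subnet', 'x.x.x.x', 'interface'), the ipaddr filter usage and the exact conditions are assumptions, not the real role source:

    # Sketch only: resolve _radosgw_address from one of three possible inputs.
    - name: set_fact _radosgw_address to radosgw_address_block ipv4 (sketch)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(radosgw_address_block) | first }}"
      when: radosgw_address_block | default('subnet') != 'subnet'

    - name: set_fact _radosgw_address to radosgw_address (sketch)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ radosgw_address }}"
      when: radosgw_address | default('x.x.x.x') != 'x.x.x.x'

    - name: set_fact _radosgw_address to radosgw_interface - ipv4 (sketch)
      ansible.builtin.set_fact:
        _radosgw_address: "{{ ansible_facts[radosgw_interface | replace('-', '_')]['ipv4']['address'] }}"
      when: radosgw_interface | default('interface') != 'interface'

In this run only the plain radosgw_address branch reports ok on testbed-node-3/4/5, which matches the per-node 192.168.16.13–15 addresses that show up in the rgw_instances items further down.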
2025-05-19 19:49:30.572170 | orchestrator | Monday 19 May 2025 19:36:17 +0000 (0:00:00.736) 0:01:02.934 ************ 2025-05-19 19:49:30.572177 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572184 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572192 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572199 | orchestrator | 2025-05-19 19:49:30.572207 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.572214 | orchestrator | Monday 19 May 2025 19:36:18 +0000 (0:00:00.730) 0:01:03.665 ************ 2025-05-19 19:49:30.572221 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.572228 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.572234 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.572241 | orchestrator | 2025-05-19 19:49:30.572247 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.572254 | orchestrator | Monday 19 May 2025 19:36:19 +0000 (0:00:00.985) 0:01:04.650 ************ 2025-05-19 19:49:30.572260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.572267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.572273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.572280 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572286 | orchestrator | 2025-05-19 19:49:30.572293 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.572299 | orchestrator | Monday 19 May 2025 19:36:20 +0000 (0:00:00.632) 0:01:05.283 ************ 2025-05-19 19:49:30.572309 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.572340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.572351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.572363 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572370 | orchestrator | 2025-05-19 19:49:30.572376 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.572383 | orchestrator | Monday 19 May 2025 19:36:21 +0000 (0:00:00.820) 0:01:06.104 ************ 2025-05-19 19:49:30.572390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.572396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.572403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.572409 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572416 | orchestrator | 2025-05-19 19:49:30.572422 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.572434 | orchestrator | Monday 19 May 2025 19:36:23 +0000 (0:00:01.897) 0:01:08.002 ************ 2025-05-19 19:49:30.572441 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.572447 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.572454 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.572460 | orchestrator | 2025-05-19 19:49:30.572467 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.572478 | orchestrator | Monday 19 May 2025 19:36:23 +0000 (0:00:00.803) 0:01:08.805 ************ 2025-05-19 19:49:30.572485 | 
orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 19:49:30.572492 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 19:49:30.572498 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 19:49:30.572505 | orchestrator | 2025-05-19 19:49:30.572511 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.572518 | orchestrator | Monday 19 May 2025 19:36:24 +0000 (0:00:01.008) 0:01:09.814 ************ 2025-05-19 19:49:30.572524 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572531 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572537 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572544 | orchestrator | 2025-05-19 19:49:30.572550 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.572557 | orchestrator | Monday 19 May 2025 19:36:25 +0000 (0:00:00.603) 0:01:10.418 ************ 2025-05-19 19:49:30.572563 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572570 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572576 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572583 | orchestrator | 2025-05-19 19:49:30.572589 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.572596 | orchestrator | Monday 19 May 2025 19:36:26 +0000 (0:00:00.727) 0:01:11.145 ************ 2025-05-19 19:49:30.572602 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.572609 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572615 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.572622 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572628 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.572635 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572641 | orchestrator | 2025-05-19 19:49:30.572648 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.572654 | orchestrator | Monday 19 May 2025 19:36:26 +0000 (0:00:00.559) 0:01:11.704 ************ 2025-05-19 19:49:30.572661 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.572668 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572674 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.572681 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572687 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.572694 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572700 | orchestrator | 2025-05-19 19:49:30.572707 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.572714 | orchestrator | Monday 19 May 2025 19:36:27 +0000 (0:00:00.546) 0:01:12.251 ************ 2025-05-19 19:49:30.572720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.572727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.572733 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  
2025-05-19 19:49:30.572739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.572746 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:49:30.572767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.572773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.572780 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572786 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.572792 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.572799 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572808 | orchestrator | 2025-05-19 19:49:30.572818 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-19 19:49:30.572830 | orchestrator | Monday 19 May 2025 19:36:28 +0000 (0:00:00.906) 0:01:13.158 ************ 2025-05-19 19:49:30.572841 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.572851 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.572861 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.572872 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.572882 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.572888 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.572895 | orchestrator | 2025-05-19 19:49:30.572910 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-19 19:49:30.572917 | orchestrator | Monday 19 May 2025 19:36:28 +0000 (0:00:00.421) 0:01:13.580 ************ 2025-05-19 19:49:30.572923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.572930 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.572937 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.572943 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 19:49:30.572950 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:49:30.572957 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:49:30.572963 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:49:30.572969 | orchestrator | 2025-05-19 19:49:30.572976 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-19 19:49:30.572983 | orchestrator | Monday 19 May 2025 19:36:29 +0000 (0:00:00.896) 0:01:14.477 ************ 2025-05-19 19:49:30.572990 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.573001 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.573008 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.573014 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 19:49:30.573021 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:49:30.573028 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:49:30.573034 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:49:30.573041 | orchestrator | 2025-05-19 19:49:30.573047 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.573054 | orchestrator | Monday 19 May 2025 19:36:31 +0000 (0:00:01.717) 0:01:16.195 ************ 2025-05-19 19:49:30.573061 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.573070 | orchestrator | 2025-05-19 19:49:30.573076 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.573083 | orchestrator | Monday 19 May 2025 19:36:32 +0000 (0:00:01.387) 0:01:17.582 ************ 2025-05-19 19:49:30.573090 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.573096 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573139 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.573146 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.573152 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573159 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573165 | orchestrator | 2025-05-19 19:49:30.573172 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.573178 | orchestrator | Monday 19 May 2025 19:36:33 +0000 (0:00:00.919) 0:01:18.502 ************ 2025-05-19 19:49:30.573185 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573191 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573198 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573204 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.573211 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.573217 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.573224 | orchestrator | 2025-05-19 19:49:30.573231 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.573237 | orchestrator | Monday 19 May 2025 19:36:34 +0000 (0:00:01.234) 0:01:19.737 ************ 2025-05-19 19:49:30.573244 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573250 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573257 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573263 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.573270 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.573276 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.573283 | orchestrator | 2025-05-19 19:49:30.573290 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.573296 | orchestrator | Monday 19 May 2025 19:36:35 +0000 (0:00:01.183) 0:01:20.921 ************ 2025-05-19 19:49:30.573303 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573309 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573333 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573340 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.573349 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.573360 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.573371 | orchestrator | 2025-05-19 19:49:30.573383 | 
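
The check_running_containers.yml block above (mon/osd/mds/rgw and, below, mgr, rbd-mirror, nfs, tcmu-runner, rbd-target-api, rbd-target-gw, ceph-crash) typically amounts to listing running containers on each host and registering the result, which the later handler_*_status facts then reduce to booleans. A minimal, hypothetical sketch of one such check, assuming a podman/docker binary in container_binary and a ceph-mon-<hostname> naming convention — neither detail is taken from the actual role:

    # Sketch only: detect a running mon container and derive a status fact from it.
    - name: check for a mon container (sketch)
      ansible.builtin.command: "{{ container_binary | default('podman') }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
      register: ceph_mon_container_stat
      changed_when: false
      failed_when: false
      when: inventory_hostname in groups.get('mons', [])

    - name: set_fact handler_mon_status (sketch)
      ansible.builtin.set_fact:
        handler_mon_status: "{{ ceph_mon_container_stat.stdout_lines | default([]) | length > 0 }}"
      when: inventory_hostname in groups.get('mons', [])

That split explains the ok/skipping pattern in the log: the mon and mgr checks only report ok on the control nodes (testbed-node-0/1/2), the osd, mds and rgw checks only on testbed-node-3/4/5, and ceph-crash on all six.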
orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.573390 | orchestrator | Monday 19 May 2025 19:36:36 +0000 (0:00:00.966) 0:01:21.887 ************ 2025-05-19 19:49:30.573396 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.573403 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.573409 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573416 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573422 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573429 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.573435 | orchestrator | 2025-05-19 19:49:30.573442 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.573449 | orchestrator | Monday 19 May 2025 19:36:37 +0000 (0:00:01.007) 0:01:22.894 ************ 2025-05-19 19:49:30.573455 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573462 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573468 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573475 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573481 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573492 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573499 | orchestrator | 2025-05-19 19:49:30.573506 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.573512 | orchestrator | Monday 19 May 2025 19:36:38 +0000 (0:00:00.514) 0:01:23.408 ************ 2025-05-19 19:49:30.573519 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573525 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573532 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573538 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573545 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573551 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573563 | orchestrator | 2025-05-19 19:49:30.573570 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-19 19:49:30.573576 | orchestrator | Monday 19 May 2025 19:36:39 +0000 (0:00:00.742) 0:01:24.150 ************ 2025-05-19 19:49:30.573583 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573589 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573595 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573602 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573612 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573624 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573635 | orchestrator | 2025-05-19 19:49:30.573643 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.573649 | orchestrator | Monday 19 May 2025 19:36:39 +0000 (0:00:00.600) 0:01:24.750 ************ 2025-05-19 19:49:30.573661 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573667 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573674 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573681 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573687 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573694 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573702 | orchestrator | 2025-05-19 19:49:30.573714 | orchestrator 
| TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.573725 | orchestrator | Monday 19 May 2025 19:36:40 +0000 (0:00:00.914) 0:01:25.665 ************ 2025-05-19 19:49:30.573733 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573739 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573746 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573752 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573758 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573765 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573771 | orchestrator | 2025-05-19 19:49:30.573778 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.573785 | orchestrator | Monday 19 May 2025 19:36:41 +0000 (0:00:00.583) 0:01:26.249 ************ 2025-05-19 19:49:30.573791 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.573797 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.573804 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.573810 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.573817 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.573823 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.573830 | orchestrator | 2025-05-19 19:49:30.573836 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.573843 | orchestrator | Monday 19 May 2025 19:36:42 +0000 (0:00:01.362) 0:01:27.611 ************ 2025-05-19 19:49:30.573849 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573856 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573862 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573869 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573875 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573881 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573888 | orchestrator | 2025-05-19 19:49:30.573894 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.573901 | orchestrator | Monday 19 May 2025 19:36:43 +0000 (0:00:00.579) 0:01:28.191 ************ 2025-05-19 19:49:30.573908 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.573914 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.573920 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.573927 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.573933 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.573940 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.573946 | orchestrator | 2025-05-19 19:49:30.573953 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.573959 | orchestrator | Monday 19 May 2025 19:36:44 +0000 (0:00:00.849) 0:01:29.041 ************ 2025-05-19 19:49:30.573971 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.573977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.573984 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.573990 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.573997 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.574003 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.574010 | orchestrator | 2025-05-19 19:49:30.574056 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] 
****************************** 2025-05-19 19:49:30.574064 | orchestrator | Monday 19 May 2025 19:36:45 +0000 (0:00:00.991) 0:01:30.032 ************ 2025-05-19 19:49:30.574071 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574078 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574085 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574092 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.574098 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.574105 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.574111 | orchestrator | 2025-05-19 19:49:30.574118 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.574124 | orchestrator | Monday 19 May 2025 19:36:46 +0000 (0:00:01.277) 0:01:31.309 ************ 2025-05-19 19:49:30.574131 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574138 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574144 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574151 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.574157 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.574164 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.574170 | orchestrator | 2025-05-19 19:49:30.574177 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.574183 | orchestrator | Monday 19 May 2025 19:36:47 +0000 (0:00:00.746) 0:01:32.056 ************ 2025-05-19 19:49:30.574190 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574196 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574203 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574214 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574221 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574227 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574234 | orchestrator | 2025-05-19 19:49:30.574241 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.574247 | orchestrator | Monday 19 May 2025 19:36:47 +0000 (0:00:00.566) 0:01:32.623 ************ 2025-05-19 19:49:30.574254 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574261 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574267 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574274 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574281 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574287 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574294 | orchestrator | 2025-05-19 19:49:30.574301 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.574308 | orchestrator | Monday 19 May 2025 19:36:48 +0000 (0:00:00.480) 0:01:33.103 ************ 2025-05-19 19:49:30.574338 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.574346 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.574353 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.574360 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574367 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574373 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574380 | orchestrator | 2025-05-19 19:49:30.574387 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 
19:49:30.574400 | orchestrator | Monday 19 May 2025 19:36:48 +0000 (0:00:00.662) 0:01:33.765 ************ 2025-05-19 19:49:30.574406 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.574413 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.574420 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.574427 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.574440 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.574447 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.574453 | orchestrator | 2025-05-19 19:49:30.574460 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.574467 | orchestrator | Monday 19 May 2025 19:36:49 +0000 (0:00:00.559) 0:01:34.325 ************ 2025-05-19 19:49:30.574473 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574480 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574486 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574493 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574500 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574506 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574513 | orchestrator | 2025-05-19 19:49:30.574519 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.574526 | orchestrator | Monday 19 May 2025 19:36:50 +0000 (0:00:00.698) 0:01:35.024 ************ 2025-05-19 19:49:30.574533 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574539 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574546 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574552 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574559 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574565 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574572 | orchestrator | 2025-05-19 19:49:30.574579 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.574585 | orchestrator | Monday 19 May 2025 19:36:50 +0000 (0:00:00.531) 0:01:35.555 ************ 2025-05-19 19:49:30.574592 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574599 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574605 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574612 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574619 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574625 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574632 | orchestrator | 2025-05-19 19:49:30.574638 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.574645 | orchestrator | Monday 19 May 2025 19:36:51 +0000 (0:00:00.683) 0:01:36.239 ************ 2025-05-19 19:49:30.574652 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574658 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574665 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574671 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574678 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574685 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574691 | orchestrator | 2025-05-19 19:49:30.574698 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.574704 | orchestrator 
| Monday 19 May 2025 19:36:51 +0000 (0:00:00.576) 0:01:36.816 ************ 2025-05-19 19:49:30.574711 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574718 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574724 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574731 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574737 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574744 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574750 | orchestrator | 2025-05-19 19:49:30.574757 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.574764 | orchestrator | Monday 19 May 2025 19:36:52 +0000 (0:00:00.803) 0:01:37.619 ************ 2025-05-19 19:49:30.574770 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574777 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574784 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574790 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574797 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574803 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574810 | orchestrator | 2025-05-19 19:49:30.574821 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.574828 | orchestrator | Monday 19 May 2025 19:36:53 +0000 (0:00:00.606) 0:01:38.225 ************ 2025-05-19 19:49:30.574835 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574841 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574848 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574855 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574861 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574868 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574874 | orchestrator | 2025-05-19 19:49:30.574881 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.574891 | orchestrator | Monday 19 May 2025 19:36:54 +0000 (0:00:01.001) 0:01:39.226 ************ 2025-05-19 19:49:30.574898 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574905 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574911 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574918 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574924 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574931 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574938 | orchestrator | 2025-05-19 19:49:30.574944 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.574951 | orchestrator | Monday 19 May 2025 19:36:55 +0000 (0:00:00.766) 0:01:39.993 ************ 2025-05-19 19:49:30.574958 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.574964 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.574971 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.574977 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.574984 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.574991 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.574997 | orchestrator | 2025-05-19 19:49:30.575004 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 
'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.575010 | orchestrator | Monday 19 May 2025 19:36:56 +0000 (0:00:00.978) 0:01:40.971 ************ 2025-05-19 19:49:30.575017 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575024 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575045 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575084 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575092 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575099 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575105 | orchestrator | 2025-05-19 19:49:30.575112 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.575118 | orchestrator | Monday 19 May 2025 19:36:56 +0000 (0:00:00.601) 0:01:41.573 ************ 2025-05-19 19:49:30.575125 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575131 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575138 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575144 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575151 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575158 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575164 | orchestrator | 2025-05-19 19:49:30.575171 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.575177 | orchestrator | Monday 19 May 2025 19:36:57 +0000 (0:00:00.874) 0:01:42.448 ************ 2025-05-19 19:49:30.575184 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575191 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575197 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575204 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575210 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575217 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575223 | orchestrator | 2025-05-19 19:49:30.575230 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.575242 | orchestrator | Monday 19 May 2025 19:36:58 +0000 (0:00:00.691) 0:01:43.139 ************ 2025-05-19 19:49:30.575249 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.575256 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.575263 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575269 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.575276 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.575282 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575289 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.575295 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.575301 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575308 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.575359 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.575367 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.575374 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.575381 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575387 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575394 | 
orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.575401 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.575407 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575414 | orchestrator | 2025-05-19 19:49:30.575421 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.575427 | orchestrator | Monday 19 May 2025 19:36:59 +0000 (0:00:00.996) 0:01:44.135 ************ 2025-05-19 19:49:30.575434 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-19 19:49:30.575441 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-19 19:49:30.575447 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575454 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-19 19:49:30.575461 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-19 19:49:30.575467 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575474 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-19 19:49:30.575480 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-19 19:49:30.575487 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575494 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-19 19:49:30.575501 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-19 19:49:30.575507 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575514 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-19 19:49:30.575520 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-19 19:49:30.575527 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575534 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-19 19:49:30.575540 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-19 19:49:30.575551 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575558 | orchestrator | 2025-05-19 19:49:30.575565 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.575572 | orchestrator | Monday 19 May 2025 19:37:00 +0000 (0:00:00.958) 0:01:45.094 ************ 2025-05-19 19:49:30.575578 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575585 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575592 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575598 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575605 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575611 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575618 | orchestrator | 2025-05-19 19:49:30.575625 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.575637 | orchestrator | Monday 19 May 2025 19:37:01 +0000 (0:00:00.931) 0:01:46.026 ************ 2025-05-19 19:49:30.575643 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575650 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575657 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575663 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575670 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575676 | orchestrator | skipping: [testbed-node-5] 2025-05-19 
19:49:30.575683 | orchestrator | 2025-05-19 19:49:30.575689 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.575702 | orchestrator | Monday 19 May 2025 19:37:01 +0000 (0:00:00.901) 0:01:46.928 ************ 2025-05-19 19:49:30.575708 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575715 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575722 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575728 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575735 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575741 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575748 | orchestrator | 2025-05-19 19:49:30.575754 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.575761 | orchestrator | Monday 19 May 2025 19:37:02 +0000 (0:00:00.860) 0:01:47.789 ************ 2025-05-19 19:49:30.575768 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575775 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575781 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575788 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575794 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575801 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575807 | orchestrator | 2025-05-19 19:49:30.575814 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.575820 | orchestrator | Monday 19 May 2025 19:37:03 +0000 (0:00:00.613) 0:01:48.402 ************ 2025-05-19 19:49:30.575827 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575833 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575840 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575847 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575853 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575860 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575866 | orchestrator | 2025-05-19 19:49:30.575873 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.575880 | orchestrator | Monday 19 May 2025 19:37:04 +0000 (0:00:00.801) 0:01:49.204 ************ 2025-05-19 19:49:30.575886 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.575893 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.575899 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.575906 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.575913 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.575919 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.575926 | orchestrator | 2025-05-19 19:49:30.575932 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.575939 | orchestrator | Monday 19 May 2025 19:37:04 +0000 (0:00:00.581) 0:01:49.786 ************ 2025-05-19 19:49:30.575945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.575952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.575959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.575965 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
19:49:30.575972 | orchestrator | 2025-05-19 19:49:30.575978 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.575985 | orchestrator | Monday 19 May 2025 19:37:05 +0000 (0:00:00.622) 0:01:50.408 ************ 2025-05-19 19:49:30.575992 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.576003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.576010 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.576016 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576023 | orchestrator | 2025-05-19 19:49:30.576029 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.576036 | orchestrator | Monday 19 May 2025 19:37:06 +0000 (0:00:00.720) 0:01:51.129 ************ 2025-05-19 19:49:30.576043 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.576049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.576056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.576062 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576069 | orchestrator | 2025-05-19 19:49:30.576076 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.576082 | orchestrator | Monday 19 May 2025 19:37:06 +0000 (0:00:00.418) 0:01:51.548 ************ 2025-05-19 19:49:30.576089 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576095 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576102 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576109 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576115 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576122 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576128 | orchestrator | 2025-05-19 19:49:30.576135 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.576145 | orchestrator | Monday 19 May 2025 19:37:07 +0000 (0:00:00.606) 0:01:52.155 ************ 2025-05-19 19:49:30.576152 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.576158 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576165 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.576172 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576178 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.576185 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.576191 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576198 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576204 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.576210 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576217 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.576224 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576230 | orchestrator | 2025-05-19 19:49:30.576237 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.576244 | orchestrator | Monday 19 May 2025 19:37:08 +0000 (0:00:01.049) 0:01:53.205 ************ 2025-05-19 19:49:30.576250 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 19:49:30.576257 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576263 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576270 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576276 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576283 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576289 | orchestrator | 2025-05-19 19:49:30.576301 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.576308 | orchestrator | Monday 19 May 2025 19:37:08 +0000 (0:00:00.621) 0:01:53.826 ************ 2025-05-19 19:49:30.576356 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576369 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576380 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576390 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576401 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576413 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576424 | orchestrator | 2025-05-19 19:49:30.576435 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.576455 | orchestrator | Monday 19 May 2025 19:37:09 +0000 (0:00:00.946) 0:01:54.773 ************ 2025-05-19 19:49:30.576467 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.576477 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576489 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.576495 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576502 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.576508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576515 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.576521 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576528 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.576534 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576541 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.576547 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576557 | orchestrator | 2025-05-19 19:49:30.576568 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.576579 | orchestrator | Monday 19 May 2025 19:37:10 +0000 (0:00:00.829) 0:01:55.602 ************ 2025-05-19 19:49:30.576590 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576597 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576603 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576610 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.576617 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576623 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.576630 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576636 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.576643 | orchestrator | skipping: [testbed-node-5] 2025-05-19 
19:49:30.576650 | orchestrator | 2025-05-19 19:49:30.576656 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.576663 | orchestrator | Monday 19 May 2025 19:37:11 +0000 (0:00:00.940) 0:01:56.543 ************ 2025-05-19 19:49:30.576670 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.576676 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.576683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.576689 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576696 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:49:30.576702 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:49:30.576709 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:49:30.576715 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576722 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:49:30.576728 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:49:30.576735 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:49:30.576741 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.576755 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.576761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.576767 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.576780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.576794 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.576801 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576812 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:49:30.576818 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.576825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.576831 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576838 | orchestrator | 2025-05-19 19:49:30.576844 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.576851 | orchestrator | Monday 19 May 2025 19:37:13 +0000 (0:00:01.605) 0:01:58.148 ************ 2025-05-19 19:49:30.576858 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576866 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.576879 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.576893 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.576906 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.576920 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.576933 | orchestrator | 2025-05-19 19:49:30.576946 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.576959 | orchestrator | Monday 19 May 2025 19:37:14 +0000 (0:00:01.256) 0:01:59.405 ************ 2025-05-19 19:49:30.576972 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.576986 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.577007 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.577021 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.577035 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.577048 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.577061 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.577073 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.577086 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.577099 | orchestrator | 2025-05-19 19:49:30.577113 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.577126 | orchestrator | Monday 19 May 2025 19:37:15 +0000 (0:00:01.445) 0:02:00.851 ************ 2025-05-19 19:49:30.577140 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.577154 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.577168 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.577181 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.577194 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.577207 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.577220 | orchestrator | 2025-05-19 19:49:30.577234 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.577248 | orchestrator | Monday 19 May 2025 19:37:17 +0000 (0:00:01.358) 0:02:02.209 ************ 2025-05-19 19:49:30.577260 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.577274 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.577288 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.577302 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.577338 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.577355 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.577368 | orchestrator | 2025-05-19 19:49:30.577382 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-19 19:49:30.577397 | orchestrator | Monday 19 May 2025 19:37:18 +0000 (0:00:01.338) 0:02:03.547 ************ 2025-05-19 19:49:30.577410 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.577423 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.577437 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.577452 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.577466 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.577480 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.577494 | orchestrator | 2025-05-19 19:49:30.577508 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-19 19:49:30.577523 | orchestrator | Monday 19 May 2025 19:37:20 +0000 (0:00:01.473) 0:02:05.021 ************ 2025-05-19 19:49:30.577548 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.577561 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.577574 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.577587 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.577600 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.577614 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.577628 | orchestrator | 2025-05-19 19:49:30.577641 | orchestrator | TASK [ceph-container-common : include 
prerequisites.yml] *********************** 2025-05-19 19:49:30.577656 | orchestrator | Monday 19 May 2025 19:37:22 +0000 (0:00:02.167) 0:02:07.189 ************ 2025-05-19 19:49:30.577670 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.577685 | orchestrator | 2025-05-19 19:49:30.577699 | orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************ 2025-05-19 19:49:30.577712 | orchestrator | Monday 19 May 2025 19:37:23 +0000 (0:00:01.355) 0:02:08.544 ************ 2025-05-19 19:49:30.577725 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.577739 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.577753 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.577767 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.577780 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.577793 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.577807 | orchestrator | 2025-05-19 19:49:30.577815 | orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] **************** 2025-05-19 19:49:30.577823 | orchestrator | Monday 19 May 2025 19:37:24 +0000 (0:00:00.703) 0:02:09.248 ************ 2025-05-19 19:49:30.577831 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.577839 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.577846 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.577854 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.577861 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.577869 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.577876 | orchestrator | 2025-05-19 19:49:30.577884 | orchestrator | TASK [ceph-container-common : remove ceph udev rules] ************************** 2025-05-19 19:49:30.577898 | orchestrator | Monday 19 May 2025 19:37:25 +0000 (0:00:00.851) 0:02:10.100 ************ 2025-05-19 19:49:30.577906 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577914 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577922 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577931 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577945 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577958 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-05-19 19:49:30.577972 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 19:49:30.577986 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 19:49:30.577996 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 19:49:30.578009 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 19:49:30.578077 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 19:49:30.578093 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-05-19 
19:49:30.578107 | orchestrator | 2025-05-19 19:49:30.578120 | orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ******************** 2025-05-19 19:49:30.578134 | orchestrator | Monday 19 May 2025 19:37:26 +0000 (0:00:01.592) 0:02:11.692 ************ 2025-05-19 19:49:30.578159 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.578173 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.578187 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.578200 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.578213 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.578226 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.578239 | orchestrator | 2025-05-19 19:49:30.578252 | orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************ 2025-05-19 19:49:30.578265 | orchestrator | Monday 19 May 2025 19:37:28 +0000 (0:00:01.995) 0:02:13.688 ************ 2025-05-19 19:49:30.578279 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.578292 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.578306 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.578342 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.578358 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.578371 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.578384 | orchestrator | 2025-05-19 19:49:30.578398 | orchestrator | TASK [ceph-container-common : include registry.yml] **************************** 2025-05-19 19:49:30.578411 | orchestrator | Monday 19 May 2025 19:37:29 +0000 (0:00:00.698) 0:02:14.386 ************ 2025-05-19 19:49:30.578424 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.578437 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.578451 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.578465 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.578479 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.578492 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.578505 | orchestrator | 2025-05-19 19:49:30.578518 | orchestrator | TASK [ceph-container-common : include fetch_image.yml] ************************* 2025-05-19 19:49:30.578530 | orchestrator | Monday 19 May 2025 19:37:30 +0000 (0:00:00.848) 0:02:15.235 ************ 2025-05-19 19:49:30.578544 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.578553 | orchestrator | 2025-05-19 19:49:30.578561 | orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] *** 2025-05-19 19:49:30.578569 | orchestrator | Monday 19 May 2025 19:37:31 +0000 (0:00:01.262) 0:02:16.497 ************ 2025-05-19 19:49:30.578577 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.578585 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.578592 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.578600 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.578608 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.578615 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.578623 | orchestrator | 2025-05-19 19:49:30.578631 | orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] *** 2025-05-19 19:49:30.578639 | orchestrator | Monday 19 May 2025 
19:39:03 +0000 (0:01:31.530) 0:03:48.028 ************ 2025-05-19 19:49:30.578647 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578654 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578662 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578670 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.578677 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578685 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578693 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578701 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.578708 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578716 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578724 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578739 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.578747 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578760 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578767 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578775 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.578783 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578791 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578798 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578806 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.578814 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-05-19 19:49:30.578822 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-05-19 19:49:30.578830 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-05-19 19:49:30.578837 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.578845 | orchestrator | 2025-05-19 19:49:30.578853 | orchestrator | TASK [ceph-container-common : pulling node-exporter container image] *********** 2025-05-19 19:49:30.578861 | orchestrator | Monday 19 May 2025 19:39:04 +0000 (0:00:00.947) 0:03:48.976 ************ 2025-05-19 19:49:30.578869 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.578900 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.578914 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.578927 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.578937 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.578945 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.578953 | orchestrator | 2025-05-19 19:49:30.578960 | orchestrator | TASK [ceph-container-common : export local ceph dev image] ********************* 2025-05-19 19:49:30.578968 | orchestrator | Monday 19 May 2025 19:39:04 +0000 (0:00:00.644) 0:03:49.620 
************ 2025-05-19 19:49:30.578980 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.578994 | orchestrator | 2025-05-19 19:49:30.579008 | orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************ 2025-05-19 19:49:30.579021 | orchestrator | Monday 19 May 2025 19:39:04 +0000 (0:00:00.167) 0:03:49.787 ************ 2025-05-19 19:49:30.579031 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579039 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579047 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579055 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579063 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579070 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579078 | orchestrator | 2025-05-19 19:49:30.579086 | orchestrator | TASK [ceph-container-common : load ceph dev image] ***************************** 2025-05-19 19:49:30.579094 | orchestrator | Monday 19 May 2025 19:39:05 +0000 (0:00:00.997) 0:03:50.785 ************ 2025-05-19 19:49:30.579101 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579109 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579117 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579124 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579132 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579140 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579148 | orchestrator | 2025-05-19 19:49:30.579162 | orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ****************** 2025-05-19 19:49:30.579175 | orchestrator | Monday 19 May 2025 19:39:06 +0000 (0:00:00.688) 0:03:51.473 ************ 2025-05-19 19:49:30.579189 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579203 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579215 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579238 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579251 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579264 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579275 | orchestrator | 2025-05-19 19:49:30.579283 | orchestrator | TASK [ceph-container-common : get ceph version] ******************************** 2025-05-19 19:49:30.579291 | orchestrator | Monday 19 May 2025 19:39:07 +0000 (0:00:01.081) 0:03:52.555 ************ 2025-05-19 19:49:30.579299 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.579306 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.579370 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.579381 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.579389 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.579397 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.579405 | orchestrator | 2025-05-19 19:49:30.579412 | orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] *** 2025-05-19 19:49:30.579420 | orchestrator | Monday 19 May 2025 19:39:09 +0000 (0:00:01.725) 0:03:54.280 ************ 2025-05-19 19:49:30.579428 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.579436 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.579443 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.579451 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.579458 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.579466 | orchestrator | 
ok: [testbed-node-5] 2025-05-19 19:49:30.579473 | orchestrator | 2025-05-19 19:49:30.579481 | orchestrator | TASK [ceph-container-common : include release.yml] ***************************** 2025-05-19 19:49:30.579489 | orchestrator | Monday 19 May 2025 19:39:10 +0000 (0:00:00.860) 0:03:55.141 ************ 2025-05-19 19:49:30.579498 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.579507 | orchestrator | 2025-05-19 19:49:30.579515 | orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] ********************* 2025-05-19 19:49:30.579523 | orchestrator | Monday 19 May 2025 19:39:11 +0000 (0:00:01.172) 0:03:56.314 ************ 2025-05-19 19:49:30.579531 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579539 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579546 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579554 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579562 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579569 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579579 | orchestrator | 2025-05-19 19:49:30.579593 | orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ******************** 2025-05-19 19:49:30.579613 | orchestrator | Monday 19 May 2025 19:39:11 +0000 (0:00:00.556) 0:03:56.870 ************ 2025-05-19 19:49:30.579625 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579633 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579640 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579648 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579655 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579663 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579671 | orchestrator | 2025-05-19 19:49:30.579678 | orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ****************** 2025-05-19 19:49:30.579686 | orchestrator | Monday 19 May 2025 19:39:12 +0000 (0:00:00.985) 0:03:57.855 ************ 2025-05-19 19:49:30.579694 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579701 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579709 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579716 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579724 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579731 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579739 | orchestrator | 2025-05-19 19:49:30.579747 | orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] ********************* 2025-05-19 19:49:30.579754 | orchestrator | Monday 19 May 2025 19:39:13 +0000 (0:00:00.634) 0:03:58.489 ************ 2025-05-19 19:49:30.579769 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579777 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579784 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579792 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579808 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579816 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579823 | orchestrator | 2025-05-19 19:49:30.579831 | orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ****************** 2025-05-19 
19:49:30.579839 | orchestrator | Monday 19 May 2025 19:39:14 +0000 (0:00:00.997) 0:03:59.487 ************ 2025-05-19 19:49:30.579846 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579854 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579862 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579869 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579877 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579884 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579892 | orchestrator | 2025-05-19 19:49:30.579899 | orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] ******************* 2025-05-19 19:49:30.579906 | orchestrator | Monday 19 May 2025 19:39:15 +0000 (0:00:00.714) 0:04:00.202 ************ 2025-05-19 19:49:30.579912 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579919 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579925 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579932 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579938 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.579945 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.579951 | orchestrator | 2025-05-19 19:49:30.579958 | orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] ******************* 2025-05-19 19:49:30.579964 | orchestrator | Monday 19 May 2025 19:39:16 +0000 (0:00:01.247) 0:04:01.450 ************ 2025-05-19 19:49:30.579971 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.579977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.579984 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.579990 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.579997 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.580003 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.580009 | orchestrator | 2025-05-19 19:49:30.580016 | orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ******************** 2025-05-19 19:49:30.580022 | orchestrator | Monday 19 May 2025 19:39:17 +0000 (0:00:00.976) 0:04:02.426 ************ 2025-05-19 19:49:30.580029 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.580035 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.580042 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.580048 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.580055 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.580061 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.580067 | orchestrator | 2025-05-19 19:49:30.580074 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.580080 | orchestrator | Monday 19 May 2025 19:39:19 +0000 (0:00:01.569) 0:04:03.996 ************ 2025-05-19 19:49:30.580087 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.580094 | orchestrator | 2025-05-19 19:49:30.580100 | orchestrator | TASK [ceph-config : create ceph initial directories] *************************** 2025-05-19 19:49:30.580107 | orchestrator | Monday 19 May 2025 19:39:20 +0000 (0:00:01.100) 0:04:05.097 ************ 2025-05-19 19:49:30.580113 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-05-19 19:49:30.580120 | orchestrator | 
changed: [testbed-node-1] => (item=/etc/ceph) 2025-05-19 19:49:30.580127 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580133 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-05-19 19:49:30.580140 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-05-19 19:49:30.580152 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580159 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580165 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-05-19 19:49:30.580172 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580178 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-05-19 19:49:30.580185 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580191 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580204 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580211 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580217 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-05-19 19:49:30.580224 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580234 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580240 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580247 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580253 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580260 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-05-19 19:49:30.580266 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580273 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580279 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580286 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580292 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-05-19 19:49:30.580298 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-05-19 19:49:30.580305 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-05-19 19:49:30.580311 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-05-19 19:49:30.580360 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580368 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580375 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-05-19 19:49:30.580381 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580388 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-05-19 19:49:30.580395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580401 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-05-19 
19:49:30.580408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580415 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-05-19 19:49:30.580422 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580428 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580435 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580442 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580449 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-05-19 19:49:30.580455 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580468 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580480 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580487 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580494 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-05-19 19:49:30.580500 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580520 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580526 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580533 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-05-19 19:49:30.580539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580546 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580552 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580559 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580566 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580573 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 19:49:30.580579 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580586 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580592 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580599 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580605 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 19:49:30.580618 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 19:49:30.580625 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 
19:49:30.580631 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580638 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580645 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-19 19:49:30.580672 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 19:49:30.580685 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 19:49:30.580692 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-19 19:49:30.580698 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 19:49:30.580705 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 19:49:30.580712 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-19 19:49:30.580718 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 19:49:30.580724 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-19 19:49:30.580731 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-19 19:49:30.580738 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-19 19:49:30.580744 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-19 19:49:30.580751 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-19 19:49:30.580757 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-19 19:49:30.580764 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-05-19 19:49:30.580770 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-19 19:49:30.580787 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-19 19:49:30.580794 | orchestrator | 2025-05-19 19:49:30.580801 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.580808 | orchestrator | Monday 19 May 2025 19:39:26 +0000 (0:00:06.551) 0:04:11.648 ************ 2025-05-19 19:49:30.580814 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.580821 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.580827 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.580834 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.580841 | orchestrator | 2025-05-19 19:49:30.580848 | orchestrator | TASK [ceph-config : create rados gateway instance directories] ***************** 2025-05-19 19:49:30.580854 | orchestrator | Monday 19 May 2025 19:39:28 +0000 (0:00:01.332) 0:04:12.981 ************ 2025-05-19 19:49:30.580861 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580868 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580875 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580882 | orchestrator | 2025-05-19 19:49:30.580888 | orchestrator | TASK [ceph-config : generate environment file] ********************************* 
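The ceph-config tasks around this point prepare one working directory and one EnvironmentFile per radosgw instance on the RGW nodes (testbed-node-3/4/5), keyed by the rgw_instances fact shown earlier in the log (instance_name, radosgw_address, radosgw_frontend_port). A minimal Ansible sketch of what such tasks could look like, assuming ceph-ansible-style variable names and the default cluster name "ceph"; this is illustrative only, not the actual role source:

# Illustrative sketch (assumption), not the ceph-ansible source. Variable names such as
# rgw_instances and item.instance_name follow the facts visible in this log; the cluster
# name "ceph" in the path is assumed.
- name: create rados gateway instance directories (sketch)
  ansible.builtin.file:
    path: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
    state: directory
    mode: "0755"
  loop: "{{ rgw_instances }}"

- name: generate environment file (sketch)
  ansible.builtin.copy:
    dest: "/var/lib/ceph/radosgw/ceph-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
    content: "INST_NAME={{ item.instance_name }}\n"
    mode: "0644"
  loop: "{{ rgw_instances }}"

The environment file carries the instance name so that a per-instance radosgw service can pick it up when the gateway containers are started later in the play.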
2025-05-19 19:49:30.580895 | orchestrator | Monday 19 May 2025 19:39:29 +0000 (0:00:01.445) 0:04:14.427 ************ 2025-05-19 19:49:30.580901 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580908 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580915 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.580922 | orchestrator | 2025-05-19 19:49:30.580929 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.580935 | orchestrator | Monday 19 May 2025 19:39:30 +0000 (0:00:01.426) 0:04:15.854 ************ 2025-05-19 19:49:30.580942 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.580949 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.580955 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.580962 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.580968 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.580975 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.580981 | orchestrator | 2025-05-19 19:49:30.580988 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.580995 | orchestrator | Monday 19 May 2025 19:39:31 +0000 (0:00:00.856) 0:04:16.710 ************ 2025-05-19 19:49:30.581002 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581008 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581015 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581021 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.581028 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.581034 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.581041 | orchestrator | 2025-05-19 19:49:30.581049 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.581055 | orchestrator | Monday 19 May 2025 19:39:32 +0000 (0:00:00.674) 0:04:17.384 ************ 2025-05-19 19:49:30.581062 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581072 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581084 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581096 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581107 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581128 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581135 | orchestrator | 2025-05-19 19:49:30.581142 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.581148 | orchestrator | Monday 19 May 2025 19:39:33 +0000 (0:00:01.009) 0:04:18.393 ************ 2025-05-19 19:49:30.581155 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581161 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581168 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581174 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581184 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581191 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581197 | orchestrator | 2025-05-19 19:49:30.581204 | orchestrator | TASK [ceph-config 
: set_fact _devices] ***************************************** 2025-05-19 19:49:30.581210 | orchestrator | Monday 19 May 2025 19:39:34 +0000 (0:00:00.679) 0:04:19.072 ************ 2025-05-19 19:49:30.581217 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581224 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581230 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581236 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581243 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581249 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581256 | orchestrator | 2025-05-19 19:49:30.581262 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.581269 | orchestrator | Monday 19 May 2025 19:39:35 +0000 (0:00:01.054) 0:04:20.127 ************ 2025-05-19 19:49:30.581276 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581282 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581289 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581295 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581301 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581308 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581334 | orchestrator | 2025-05-19 19:49:30.581342 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.581353 | orchestrator | Monday 19 May 2025 19:39:35 +0000 (0:00:00.727) 0:04:20.854 ************ 2025-05-19 19:49:30.581360 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581367 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581373 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581380 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581387 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581393 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581400 | orchestrator | 2025-05-19 19:49:30.581406 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.581413 | orchestrator | Monday 19 May 2025 19:39:36 +0000 (0:00:00.912) 0:04:21.767 ************ 2025-05-19 19:49:30.581420 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581426 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581432 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581439 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581445 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581452 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581458 | orchestrator | 2025-05-19 19:49:30.581465 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.581471 | orchestrator | Monday 19 May 2025 19:39:37 +0000 (0:00:00.656) 0:04:22.424 ************ 2025-05-19 19:49:30.581478 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581484 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581491 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581497 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.581504 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.581510 | orchestrator | ok: [testbed-node-5] 2025-05-19 
19:49:30.581525 | orchestrator | 2025-05-19 19:49:30.581532 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.581538 | orchestrator | Monday 19 May 2025 19:39:39 +0000 (0:00:02.493) 0:04:24.918 ************ 2025-05-19 19:49:30.581545 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581551 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581558 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581564 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.581570 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.581577 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.581584 | orchestrator | 2025-05-19 19:49:30.581590 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.581597 | orchestrator | Monday 19 May 2025 19:39:40 +0000 (0:00:00.687) 0:04:25.605 ************ 2025-05-19 19:49:30.581604 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.581610 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.581617 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581623 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.581629 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.581636 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581642 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.581649 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.581655 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581662 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.581668 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.581674 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581681 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.581687 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.581694 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581700 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.581707 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.581713 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581720 | orchestrator | 2025-05-19 19:49:30.581726 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.581733 | orchestrator | Monday 19 May 2025 19:39:41 +0000 (0:00:01.084) 0:04:26.690 ************ 2025-05-19 19:49:30.581739 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-19 19:49:30.581746 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-19 19:49:30.581752 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581759 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-19 19:49:30.581765 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-19 19:49:30.581772 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581778 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-19 19:49:30.581785 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-19 19:49:30.581795 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581802 | orchestrator | ok: 
[testbed-node-3] => (item=osd memory target) 2025-05-19 19:49:30.581808 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-19 19:49:30.581815 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-19 19:49:30.581821 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-19 19:49:30.581828 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-19 19:49:30.581835 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-19 19:49:30.581841 | orchestrator | 2025-05-19 19:49:30.581848 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.581854 | orchestrator | Monday 19 May 2025 19:39:42 +0000 (0:00:00.775) 0:04:27.466 ************ 2025-05-19 19:49:30.581861 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581873 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581879 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581886 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.581892 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.581899 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.581905 | orchestrator | 2025-05-19 19:49:30.581912 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.581918 | orchestrator | Monday 19 May 2025 19:39:43 +0000 (0:00:01.167) 0:04:28.634 ************ 2025-05-19 19:49:30.581925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581935 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.581942 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.581948 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.581955 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.581961 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.581968 | orchestrator | 2025-05-19 19:49:30.581975 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.581982 | orchestrator | Monday 19 May 2025 19:39:44 +0000 (0:00:00.703) 0:04:29.337 ************ 2025-05-19 19:49:30.581988 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.581995 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582001 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582008 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.582014 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.582188 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.582200 | orchestrator | 2025-05-19 19:49:30.582212 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.582223 | orchestrator | Monday 19 May 2025 19:39:45 +0000 (0:00:01.310) 0:04:30.648 ************ 2025-05-19 19:49:30.582230 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582237 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582243 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582250 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.582256 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.582263 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.582269 | orchestrator | 2025-05-19 19:49:30.582276 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block 
ipv6] **** 2025-05-19 19:49:30.582282 | orchestrator | Monday 19 May 2025 19:39:46 +0000 (0:00:00.894) 0:04:31.542 ************ 2025-05-19 19:49:30.582289 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582295 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582302 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582309 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.582330 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.582338 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.582345 | orchestrator | 2025-05-19 19:49:30.582351 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.582358 | orchestrator | Monday 19 May 2025 19:39:47 +0000 (0:00:01.354) 0:04:32.897 ************ 2025-05-19 19:49:30.582365 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582371 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582378 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582384 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.582390 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.582397 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.582404 | orchestrator | 2025-05-19 19:49:30.582410 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.582417 | orchestrator | Monday 19 May 2025 19:39:49 +0000 (0:00:01.136) 0:04:34.034 ************ 2025-05-19 19:49:30.582423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.582430 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.582443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.582450 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582456 | orchestrator | 2025-05-19 19:49:30.582463 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.582469 | orchestrator | Monday 19 May 2025 19:39:49 +0000 (0:00:00.716) 0:04:34.750 ************ 2025-05-19 19:49:30.582476 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.582482 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.582489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.582495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582502 | orchestrator | 2025-05-19 19:49:30.582508 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.582515 | orchestrator | Monday 19 May 2025 19:39:50 +0000 (0:00:01.162) 0:04:35.913 ************ 2025-05-19 19:49:30.582522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.582528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.582534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.582541 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582547 | orchestrator | 2025-05-19 19:49:30.582554 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.582561 | orchestrator | Monday 19 May 2025 19:39:51 +0000 (0:00:00.537) 0:04:36.451 ************ 2025-05-19 19:49:30.582571 | orchestrator | skipping: [testbed-node-0] 
2025-05-19 19:49:30.582578 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582585 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582592 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.582598 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.582605 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.582611 | orchestrator | 2025-05-19 19:49:30.582618 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.582625 | orchestrator | Monday 19 May 2025 19:39:52 +0000 (0:00:00.954) 0:04:37.405 ************ 2025-05-19 19:49:30.582631 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.582663 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.582670 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582677 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582683 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.582690 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582696 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 19:49:30.582703 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 19:49:30.582709 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 19:49:30.582716 | orchestrator | 2025-05-19 19:49:30.582722 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.582729 | orchestrator | Monday 19 May 2025 19:39:54 +0000 (0:00:02.142) 0:04:39.548 ************ 2025-05-19 19:49:30.582735 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582817 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582830 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582837 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.582843 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.582850 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.582856 | orchestrator | 2025-05-19 19:49:30.582867 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.582878 | orchestrator | Monday 19 May 2025 19:39:55 +0000 (0:00:00.833) 0:04:40.381 ************ 2025-05-19 19:49:30.582889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582900 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.582910 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.582921 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.582932 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.582952 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.582959 | orchestrator | 2025-05-19 19:49:30.582965 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.582972 | orchestrator | Monday 19 May 2025 19:39:56 +0000 (0:00:01.049) 0:04:41.430 ************ 2025-05-19 19:49:30.582979 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.582985 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.582992 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.582998 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.583005 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.583011 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.583018 | orchestrator | skipping: 
[testbed-node-3] => (item=0)  2025-05-19 19:49:30.583024 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.583030 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.583037 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.583043 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.583050 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.583056 | orchestrator | 2025-05-19 19:49:30.583063 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.583070 | orchestrator | Monday 19 May 2025 19:39:57 +0000 (0:00:00.897) 0:04:42.328 ************ 2025-05-19 19:49:30.583076 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.583083 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.583089 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.583096 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.583102 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.583109 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.583116 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.583122 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.583129 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.583136 | orchestrator | 2025-05-19 19:49:30.583142 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.583149 | orchestrator | Monday 19 May 2025 19:39:58 +0000 (0:00:00.910) 0:04:43.238 ************ 2025-05-19 19:49:30.583155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.583162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.583168 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.583175 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.583181 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:49:30.583188 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:49:30.583194 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:49:30.583201 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.583207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:49:30.583213 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:49:30.583220 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:49:30.583226 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.583233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.583243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.583258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.583275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.583288 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.583306 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-05-19 19:49:30.583336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.583348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.583354 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.583361 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.583367 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.583374 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.583381 | orchestrator | 2025-05-19 19:49:30.583390 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.583401 | orchestrator | Monday 19 May 2025 19:39:59 +0000 (0:00:01.597) 0:04:44.836 ************ 2025-05-19 19:49:30.583412 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.583422 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.583434 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.583444 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.583455 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.583466 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.583477 | orchestrator | 2025-05-19 19:49:30.583571 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-19 19:49:30.583583 | orchestrator | Monday 19 May 2025 19:40:05 +0000 (0:00:05.290) 0:04:50.126 ************ 2025-05-19 19:49:30.583591 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.583598 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.583606 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.583613 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.583620 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.583628 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.583635 | orchestrator | 2025-05-19 19:49:30.583642 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-19 19:49:30.583650 | orchestrator | Monday 19 May 2025 19:40:06 +0000 (0:00:01.147) 0:04:51.274 ************ 2025-05-19 19:49:30.583657 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.583665 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.583672 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.583680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.583688 | orchestrator | 2025-05-19 19:49:30.583696 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-19 19:49:30.583703 | orchestrator | Monday 19 May 2025 19:40:07 +0000 (0:00:00.946) 0:04:52.220 ************ 2025-05-19 19:49:30.583711 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.583718 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.583726 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.583733 | orchestrator | 2025-05-19 19:49:30.583741 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-19 19:49:30.583748 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.583756 | orchestrator | 2025-05-19 19:49:30.583763 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon 
restart script] *********************** 2025-05-19 19:49:30.583771 | orchestrator | Monday 19 May 2025 19:40:08 +0000 (0:00:00.949) 0:04:53.170 ************ 2025-05-19 19:49:30.583778 | orchestrator | 2025-05-19 19:49:30.583786 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-19 19:49:30.583794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.583801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.583809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.583817 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.583823 | orchestrator | 2025-05-19 19:49:30.583830 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-19 19:49:30.583844 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.583851 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.583857 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.583887 | orchestrator | 2025-05-19 19:49:30.583897 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-19 19:49:30.583908 | orchestrator | Monday 19 May 2025 19:40:09 +0000 (0:00:01.210) 0:04:54.380 ************ 2025-05-19 19:49:30.583918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.583929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.583940 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.583950 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.583961 | orchestrator | 2025-05-19 19:49:30.583972 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-19 19:49:30.583984 | orchestrator | Monday 19 May 2025 19:40:10 +0000 (0:00:00.891) 0:04:55.272 ************ 2025-05-19 19:49:30.583991 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.583997 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.584004 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.584011 | orchestrator | 2025-05-19 19:49:30.584017 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-19 19:49:30.584024 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584031 | orchestrator | 2025-05-19 19:49:30.584042 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-19 19:49:30.584053 | orchestrator | Monday 19 May 2025 19:40:10 +0000 (0:00:00.676) 0:04:55.948 ************ 2025-05-19 19:49:30.584064 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584076 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.584086 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.584098 | orchestrator | 2025-05-19 19:49:30.584105 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-19 19:49:30.584112 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584118 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.584129 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.584139 | orchestrator | 2025-05-19 19:49:30.584158 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-19 19:49:30.584170 | orchestrator | Monday 19 May 2025 
19:40:11 +0000 (0:00:00.611) 0:04:56.560 ************ 2025-05-19 19:49:30.584180 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584192 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.584199 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.584206 | orchestrator | 2025-05-19 19:49:30.584212 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-19 19:49:30.584219 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584226 | orchestrator | 2025-05-19 19:49:30.584237 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-19 19:49:30.584247 | orchestrator | Monday 19 May 2025 19:40:12 +0000 (0:00:00.667) 0:04:57.227 ************ 2025-05-19 19:49:30.584257 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584267 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.584278 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.584289 | orchestrator | 2025-05-19 19:49:30.584301 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-19 19:49:30.584312 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584373 | orchestrator | 2025-05-19 19:49:30.584380 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-19 19:49:30.584387 | orchestrator | Monday 19 May 2025 19:40:12 +0000 (0:00:00.682) 0:04:57.909 ************ 2025-05-19 19:49:30.584393 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584400 | orchestrator | 2025-05-19 19:49:30.584483 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-19 19:49:30.584493 | orchestrator | Monday 19 May 2025 19:40:13 +0000 (0:00:00.121) 0:04:58.030 ************ 2025-05-19 19:49:30.584511 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584517 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.584524 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.584531 | orchestrator | 2025-05-19 19:49:30.584537 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-19 19:49:30.584543 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584549 | orchestrator | 2025-05-19 19:49:30.584555 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-19 19:49:30.584561 | orchestrator | Monday 19 May 2025 19:40:13 +0000 (0:00:00.705) 0:04:58.736 ************ 2025-05-19 19:49:30.584567 | orchestrator | 2025-05-19 19:49:30.584573 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-19 19:49:30.584580 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584586 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.584593 | orchestrator | 2025-05-19 19:49:30.584603 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-19 19:49:30.584614 | orchestrator | Monday 19 May 2025 19:40:14 +0000 (0:00:00.700) 0:04:59.437 ************ 2025-05-19 19:49:30.584625 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.584636 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.584647 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.584656 | orchestrator | 
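The mon and mgr handler blocks in this part of the log all follow the same shape: set a _handler_called flag before the restart, copy a restart script, restart the daemon(s) only where the relevant service is already present, then clear the flag. On this first deployment nothing is running yet, which is why the restart tasks show as skipped. A minimal Python sketch of that flow follows; the condition names and the guard are simplified assumptions, not ceph-ansible's real logic.

```python
# Illustrative sketch of the ceph-handler restart pattern seen above; the real
# role works with Ansible facts and shell scripts and its guards are more involved.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    config_changed: bool
    daemon_running: bool
    handler_called: bool = False
    restarted: bool = False

def run_restart_handler(hosts):
    for h in hosts:
        h.handler_called = True        # "set _mon/_mgr_handler_called before restart"
    for h in hosts:                    # "restart ceph mon/mgr daemon(s)"
        if h.handler_called and h.config_changed and h.daemon_running:
            h.restarted = True
    for h in hosts:
        h.handler_called = False       # "set _mon/_mgr_handler_called after restart"

# First deployment: config changed everywhere but no daemons run yet, so every
# restart is skipped -- matching the "skipping" results in the log above.
hosts = [Host(f"testbed-node-{i}", True, False) for i in range(3)]
run_restart_handler(hosts)
print([(h.name, h.restarted) for h in hosts])
```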
2025-05-19 19:49:30.584668 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before restart] ******************* 2025-05-19 19:49:30.584679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.584689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.584700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.584707 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584713 | orchestrator | 2025-05-19 19:49:30.584719 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-19 19:49:30.584725 | orchestrator | Monday 19 May 2025 19:40:15 +0000 (0:00:00.977) 0:05:00.414 ************ 2025-05-19 19:49:30.584731 | orchestrator | 2025-05-19 19:49:30.584737 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-19 19:49:30.584743 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584749 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.584755 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.584761 | orchestrator | 2025-05-19 19:49:30.584767 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-19 19:49:30.584773 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.584779 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.584785 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.584791 | orchestrator | 2025-05-19 19:49:30.584797 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-19 19:49:30.584803 | orchestrator | Monday 19 May 2025 19:40:16 +0000 (0:00:01.279) 0:05:01.694 ************ 2025-05-19 19:49:30.584809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.584815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.584821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.584827 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584833 | orchestrator | 2025-05-19 19:49:30.584839 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-19 19:49:30.584845 | orchestrator | Monday 19 May 2025 19:40:17 +0000 (0:00:00.796) 0:05:02.490 ************ 2025-05-19 19:49:30.584851 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.584857 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.584863 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.584869 | orchestrator | 2025-05-19 19:49:30.584875 | orchestrator | TASK [ceph-handler : set _mgr_handler_called after restart] ******************** 2025-05-19 19:49:30.584881 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.584893 | orchestrator | 2025-05-19 19:49:30.584900 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-19 19:49:30.584906 | orchestrator | Monday 19 May 2025 19:40:18 +0000 (0:00:00.832) 0:05:03.323 ************ 2025-05-19 19:49:30.584912 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.584918 | orchestrator | 2025-05-19 19:49:30.584924 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-19 19:49:30.584936 | 
orchestrator | Monday 19 May 2025 19:40:18 +0000 (0:00:00.505) 0:05:03.828 ************ 2025-05-19 19:49:30.584942 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.584948 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.584954 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.584960 | orchestrator | 2025-05-19 19:49:30.584967 | orchestrator | TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] ***************** 2025-05-19 19:49:30.584973 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.584980 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.584991 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.585000 | orchestrator | 2025-05-19 19:49:30.585010 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-19 19:49:30.585020 | orchestrator | Monday 19 May 2025 19:40:19 +0000 (0:00:00.918) 0:05:04.747 ************ 2025-05-19 19:49:30.585031 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.585041 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.585050 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.585060 | orchestrator | 2025-05-19 19:49:30.585066 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.585073 | orchestrator | Monday 19 May 2025 19:40:20 +0000 (0:00:01.202) 0:05:05.949 ************ 2025-05-19 19:49:30.585079 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.585085 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.585091 | orchestrator | 2025-05-19 19:49:30.585097 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-19 19:49:30.585103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.585110 | orchestrator | 2025-05-19 19:49:30.585177 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.585187 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.585194 | orchestrator | 2025-05-19 19:49:30.585201 | orchestrator | TASK [ceph-handler : remove tempdir for scripts] ******************************* 2025-05-19 19:49:30.585208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.585215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.585222 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.585232 | orchestrator | 2025-05-19 19:49:30.585242 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-19 19:49:30.585252 | orchestrator | Monday 19 May 2025 19:40:22 +0000 (0:00:01.345) 0:05:07.295 ************ 2025-05-19 19:49:30.585262 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.585272 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.585283 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.585294 | orchestrator | 2025-05-19 19:49:30.585305 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-19 19:49:30.585332 | orchestrator | Monday 19 May 2025 19:40:23 +0000 (0:00:00.788) 0:05:08.083 ************ 2025-05-19 19:49:30.585340 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.585347 | orchestrator | 2025-05-19 19:49:30.585354 | orchestrator | RUNNING HANDLER 
[ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-19 19:49:30.585361 | orchestrator | Monday 19 May 2025 19:40:23 +0000 (0:00:00.465) 0:05:08.549 ************ 2025-05-19 19:49:30.585369 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.585379 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.585404 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.585414 | orchestrator | 2025-05-19 19:49:30.585423 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-19 19:49:30.585434 | orchestrator | Monday 19 May 2025 19:40:23 +0000 (0:00:00.281) 0:05:08.830 ************ 2025-05-19 19:49:30.585440 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.585447 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.585453 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.585459 | orchestrator | 2025-05-19 19:49:30.585466 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-19 19:49:30.585472 | orchestrator | Monday 19 May 2025 19:40:25 +0000 (0:00:01.278) 0:05:10.109 ************ 2025-05-19 19:49:30.585501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.585512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.585523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.585533 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.585543 | orchestrator | 2025-05-19 19:49:30.585553 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-19 19:49:30.585562 | orchestrator | Monday 19 May 2025 19:40:25 +0000 (0:00:00.619) 0:05:10.728 ************ 2025-05-19 19:49:30.585569 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.585579 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.585590 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.585600 | orchestrator | 2025-05-19 19:49:30.585610 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-19 19:49:30.585620 | orchestrator | Monday 19 May 2025 19:40:26 +0000 (0:00:00.325) 0:05:11.054 ************ 2025-05-19 19:49:30.585630 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.585640 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.585651 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.585661 | orchestrator | 2025-05-19 19:49:30.585671 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-19 19:49:30.585680 | orchestrator | Monday 19 May 2025 19:40:26 +0000 (0:00:00.273) 0:05:11.328 ************ 2025-05-19 19:49:30.585686 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.585692 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.585698 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.585704 | orchestrator | 2025-05-19 19:49:30.585710 | orchestrator | RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ****** 2025-05-19 19:49:30.585716 | orchestrator | Monday 19 May 2025 19:40:26 +0000 (0:00:00.417) 0:05:11.746 ************ 2025-05-19 19:49:30.585722 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.585728 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.585734 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.585740 | orchestrator | 2025-05-19 
19:49:30.585746 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.585752 | orchestrator | Monday 19 May 2025 19:40:27 +0000 (0:00:00.282) 0:05:12.029 ************ 2025-05-19 19:49:30.585758 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.585770 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.585776 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.585782 | orchestrator | 2025-05-19 19:49:30.585788 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-19 19:49:30.585794 | orchestrator | 2025-05-19 19:49:30.585800 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.585806 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:01.973) 0:05:14.002 ************ 2025-05-19 19:49:30.585813 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.585819 | orchestrator | 2025-05-19 19:49:30.585825 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.585831 | orchestrator | Monday 19 May 2025 19:40:29 +0000 (0:00:00.915) 0:05:14.917 ************ 2025-05-19 19:49:30.585843 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.585849 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.585855 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.585861 | orchestrator | 2025-05-19 19:49:30.585867 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.585873 | orchestrator | Monday 19 May 2025 19:40:30 +0000 (0:00:00.796) 0:05:15.714 ************ 2025-05-19 19:49:30.585879 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.585885 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.585957 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.585966 | orchestrator | 2025-05-19 19:49:30.585972 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.585979 | orchestrator | Monday 19 May 2025 19:40:31 +0000 (0:00:00.438) 0:05:16.152 ************ 2025-05-19 19:49:30.585985 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.585991 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.585997 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586003 | orchestrator | 2025-05-19 19:49:30.586009 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.586039 | orchestrator | Monday 19 May 2025 19:40:31 +0000 (0:00:00.670) 0:05:16.823 ************ 2025-05-19 19:49:30.586051 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586062 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586072 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586082 | orchestrator | 2025-05-19 19:49:30.586092 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.586103 | orchestrator | Monday 19 May 2025 19:40:32 +0000 (0:00:00.380) 0:05:17.203 ************ 2025-05-19 19:49:30.586109 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.586115 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.586121 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.586127 | 
orchestrator | 2025-05-19 19:49:30.586133 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.586139 | orchestrator | Monday 19 May 2025 19:40:33 +0000 (0:00:00.751) 0:05:17.955 ************ 2025-05-19 19:49:30.586145 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586151 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586157 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586164 | orchestrator | 2025-05-19 19:49:30.586170 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.586176 | orchestrator | Monday 19 May 2025 19:40:33 +0000 (0:00:00.378) 0:05:18.333 ************ 2025-05-19 19:49:30.586182 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586188 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586194 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586200 | orchestrator | 2025-05-19 19:49:30.586206 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-19 19:49:30.586213 | orchestrator | Monday 19 May 2025 19:40:34 +0000 (0:00:00.623) 0:05:18.957 ************ 2025-05-19 19:49:30.586219 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586228 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586238 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586249 | orchestrator | 2025-05-19 19:49:30.586259 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.586266 | orchestrator | Monday 19 May 2025 19:40:34 +0000 (0:00:00.303) 0:05:19.260 ************ 2025-05-19 19:49:30.586272 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586278 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586284 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586290 | orchestrator | 2025-05-19 19:49:30.586296 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.586302 | orchestrator | Monday 19 May 2025 19:40:34 +0000 (0:00:00.276) 0:05:19.536 ************ 2025-05-19 19:49:30.586308 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586347 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586355 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586361 | orchestrator | 2025-05-19 19:49:30.586367 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.586373 | orchestrator | Monday 19 May 2025 19:40:34 +0000 (0:00:00.285) 0:05:19.821 ************ 2025-05-19 19:49:30.586379 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.586385 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.586391 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.586397 | orchestrator | 2025-05-19 19:49:30.586403 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.586409 | orchestrator | Monday 19 May 2025 19:40:35 +0000 (0:00:00.983) 0:05:20.805 ************ 2025-05-19 19:49:30.586415 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586421 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586427 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586433 | orchestrator | 2025-05-19 19:49:30.586440 | orchestrator | TASK [ceph-handler : 
set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.586446 | orchestrator | Monday 19 May 2025 19:40:36 +0000 (0:00:00.342) 0:05:21.148 ************ 2025-05-19 19:49:30.586452 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.586458 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.586464 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.586470 | orchestrator | 2025-05-19 19:49:30.586476 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.586486 | orchestrator | Monday 19 May 2025 19:40:36 +0000 (0:00:00.342) 0:05:21.491 ************ 2025-05-19 19:49:30.586492 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586498 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586504 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586510 | orchestrator | 2025-05-19 19:49:30.586516 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.586522 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:00.514) 0:05:22.005 ************ 2025-05-19 19:49:30.586528 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586534 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586540 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586546 | orchestrator | 2025-05-19 19:49:30.586552 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.586558 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:00.297) 0:05:22.302 ************ 2025-05-19 19:49:30.586564 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586570 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586576 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586583 | orchestrator | 2025-05-19 19:49:30.586590 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.586597 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:00.337) 0:05:22.640 ************ 2025-05-19 19:49:30.586604 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586611 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586678 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586688 | orchestrator | 2025-05-19 19:49:30.586695 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.586702 | orchestrator | Monday 19 May 2025 19:40:37 +0000 (0:00:00.292) 0:05:22.932 ************ 2025-05-19 19:49:30.586709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586715 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586723 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586730 | orchestrator | 2025-05-19 19:49:30.586737 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.586744 | orchestrator | Monday 19 May 2025 19:40:38 +0000 (0:00:00.298) 0:05:23.230 ************ 2025-05-19 19:49:30.586751 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.586757 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.586771 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.586777 | orchestrator | 2025-05-19 19:49:30.586784 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.586792 | 
orchestrator | Monday 19 May 2025 19:40:38 +0000 (0:00:00.505) 0:05:23.736 ************ 2025-05-19 19:49:30.586799 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.586806 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.586813 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.586821 | orchestrator | 2025-05-19 19:49:30.586831 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.586840 | orchestrator | Monday 19 May 2025 19:40:39 +0000 (0:00:00.319) 0:05:24.055 ************ 2025-05-19 19:49:30.586851 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586862 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586871 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586882 | orchestrator | 2025-05-19 19:49:30.586892 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.586902 | orchestrator | Monday 19 May 2025 19:40:39 +0000 (0:00:00.317) 0:05:24.373 ************ 2025-05-19 19:49:30.586912 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586922 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586930 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586936 | orchestrator | 2025-05-19 19:49:30.586942 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.586949 | orchestrator | Monday 19 May 2025 19:40:40 +0000 (0:00:00.637) 0:05:25.010 ************ 2025-05-19 19:49:30.586955 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586961 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.586967 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.586973 | orchestrator | 2025-05-19 19:49:30.586979 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.586985 | orchestrator | Monday 19 May 2025 19:40:40 +0000 (0:00:00.369) 0:05:25.380 ************ 2025-05-19 19:49:30.586991 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.586997 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587003 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587009 | orchestrator | 2025-05-19 19:49:30.587034 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.587040 | orchestrator | Monday 19 May 2025 19:40:40 +0000 (0:00:00.359) 0:05:25.740 ************ 2025-05-19 19:49:30.587046 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587052 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587058 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587064 | orchestrator | 2025-05-19 19:49:30.587070 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.587077 | orchestrator | Monday 19 May 2025 19:40:41 +0000 (0:00:00.373) 0:05:26.113 ************ 2025-05-19 19:49:30.587083 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587089 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587095 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587102 | orchestrator | 2025-05-19 19:49:30.587108 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.587114 | orchestrator | Monday 19 May 2025 19:40:41 +0000 (0:00:00.644) 0:05:26.757 
************ 2025-05-19 19:49:30.587120 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587126 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587132 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587138 | orchestrator | 2025-05-19 19:49:30.587144 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.587151 | orchestrator | Monday 19 May 2025 19:40:42 +0000 (0:00:00.357) 0:05:27.115 ************ 2025-05-19 19:49:30.587158 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587164 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587169 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587182 | orchestrator | 2025-05-19 19:49:30.587193 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.587199 | orchestrator | Monday 19 May 2025 19:40:42 +0000 (0:00:00.377) 0:05:27.492 ************ 2025-05-19 19:49:30.587205 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587211 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587217 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587223 | orchestrator | 2025-05-19 19:49:30.587230 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.587236 | orchestrator | Monday 19 May 2025 19:40:42 +0000 (0:00:00.343) 0:05:27.836 ************ 2025-05-19 19:49:30.587242 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587248 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587254 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587260 | orchestrator | 2025-05-19 19:49:30.587266 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.587272 | orchestrator | Monday 19 May 2025 19:40:43 +0000 (0:00:00.583) 0:05:28.420 ************ 2025-05-19 19:49:30.587278 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587284 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587290 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587296 | orchestrator | 2025-05-19 19:49:30.587303 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.587388 | orchestrator | Monday 19 May 2025 19:40:43 +0000 (0:00:00.353) 0:05:28.774 ************ 2025-05-19 19:49:30.587403 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587413 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587423 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587434 | orchestrator | 2025-05-19 19:49:30.587444 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.587454 | orchestrator | Monday 19 May 2025 19:40:44 +0000 (0:00:00.336) 0:05:29.110 ************ 2025-05-19 19:49:30.587464 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.587474 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.587483 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587490 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.587496 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.587502 | orchestrator 
| skipping: [testbed-node-1] 2025-05-19 19:49:30.587508 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.587514 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.587520 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587526 | orchestrator | 2025-05-19 19:49:30.587532 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.587538 | orchestrator | Monday 19 May 2025 19:40:44 +0000 (0:00:00.397) 0:05:29.508 ************ 2025-05-19 19:49:30.587544 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-19 19:49:30.587551 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-19 19:49:30.587557 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587563 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-19 19:49:30.587569 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-19 19:49:30.587575 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587581 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-19 19:49:30.587587 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-19 19:49:30.587594 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587600 | orchestrator | 2025-05-19 19:49:30.587606 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.587612 | orchestrator | Monday 19 May 2025 19:40:45 +0000 (0:00:00.642) 0:05:30.150 ************ 2025-05-19 19:49:30.587625 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587631 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587637 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587643 | orchestrator | 2025-05-19 19:49:30.587649 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.587655 | orchestrator | Monday 19 May 2025 19:40:45 +0000 (0:00:00.444) 0:05:30.595 ************ 2025-05-19 19:49:30.587661 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587667 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587674 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587680 | orchestrator | 2025-05-19 19:49:30.587686 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.587692 | orchestrator | Monday 19 May 2025 19:40:46 +0000 (0:00:00.432) 0:05:31.028 ************ 2025-05-19 19:49:30.587698 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587704 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587710 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587719 | orchestrator | 2025-05-19 19:49:30.587729 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.587739 | orchestrator | Monday 19 May 2025 19:40:46 +0000 (0:00:00.433) 0:05:31.461 ************ 2025-05-19 19:49:30.587749 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587760 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587770 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587781 | orchestrator | 2025-05-19 19:49:30.587790 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2025-05-19 19:49:30.587796 | orchestrator | Monday 19 May 2025 19:40:47 +0000 (0:00:00.711) 0:05:32.173 ************ 2025-05-19 19:49:30.587803 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587809 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587815 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587821 | orchestrator | 2025-05-19 19:49:30.587831 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.587841 | orchestrator | Monday 19 May 2025 19:40:47 +0000 (0:00:00.462) 0:05:32.635 ************ 2025-05-19 19:49:30.587851 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587860 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.587870 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.587881 | orchestrator | 2025-05-19 19:49:30.587896 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.587905 | orchestrator | Monday 19 May 2025 19:40:48 +0000 (0:00:00.471) 0:05:33.107 ************ 2025-05-19 19:49:30.587911 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.587917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.587924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.587930 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587936 | orchestrator | 2025-05-19 19:49:30.587942 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.587948 | orchestrator | Monday 19 May 2025 19:40:48 +0000 (0:00:00.523) 0:05:33.630 ************ 2025-05-19 19:49:30.587954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.587960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.587966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.587972 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.587978 | orchestrator | 2025-05-19 19:49:30.587985 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.587991 | orchestrator | Monday 19 May 2025 19:40:49 +0000 (0:00:00.424) 0:05:34.055 ************ 2025-05-19 19:49:30.588059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.588069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.588082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.588089 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588096 | orchestrator | 2025-05-19 19:49:30.588103 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.588109 | orchestrator | Monday 19 May 2025 19:40:49 +0000 (0:00:00.741) 0:05:34.796 ************ 2025-05-19 19:49:30.588117 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588123 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588130 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588137 | orchestrator | 2025-05-19 19:49:30.588144 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.588151 | orchestrator | Monday 19 May 2025 19:40:50 +0000 
(0:00:00.820) 0:05:35.617 ************ 2025-05-19 19:49:30.588158 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.588165 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588172 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.588178 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588186 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.588193 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588200 | orchestrator | 2025-05-19 19:49:30.588207 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.588214 | orchestrator | Monday 19 May 2025 19:40:51 +0000 (0:00:00.525) 0:05:36.143 ************ 2025-05-19 19:49:30.588221 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588228 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588235 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588241 | orchestrator | 2025-05-19 19:49:30.588248 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.588255 | orchestrator | Monday 19 May 2025 19:40:51 +0000 (0:00:00.384) 0:05:36.527 ************ 2025-05-19 19:49:30.588262 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588269 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588276 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588283 | orchestrator | 2025-05-19 19:49:30.588290 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.588297 | orchestrator | Monday 19 May 2025 19:40:51 +0000 (0:00:00.399) 0:05:36.927 ************ 2025-05-19 19:49:30.588304 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.588311 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588366 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.588374 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588381 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.588388 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588395 | orchestrator | 2025-05-19 19:49:30.588403 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.588410 | orchestrator | Monday 19 May 2025 19:40:53 +0000 (0:00:01.139) 0:05:38.067 ************ 2025-05-19 19:49:30.588416 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588441 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588447 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588453 | orchestrator | 2025-05-19 19:49:30.588459 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.588466 | orchestrator | Monday 19 May 2025 19:40:53 +0000 (0:00:00.397) 0:05:38.464 ************ 2025-05-19 19:49:30.588472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.588478 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.588484 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.588490 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588496 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:49:30.588508 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:49:30.588515 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:49:30.588521 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588527 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:49:30.588533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:49:30.588539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:49:30.588545 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588551 | orchestrator | 2025-05-19 19:49:30.588558 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.588568 | orchestrator | Monday 19 May 2025 19:40:54 +0000 (0:00:00.635) 0:05:39.100 ************ 2025-05-19 19:49:30.588575 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588586 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588596 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588607 | orchestrator | 2025-05-19 19:49:30.588617 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.588628 | orchestrator | Monday 19 May 2025 19:40:55 +0000 (0:00:00.952) 0:05:40.052 ************ 2025-05-19 19:49:30.588638 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588648 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588659 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588669 | orchestrator | 2025-05-19 19:49:30.588680 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.588691 | orchestrator | Monday 19 May 2025 19:40:55 +0000 (0:00:00.570) 0:05:40.623 ************ 2025-05-19 19:49:30.588702 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588714 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588724 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588735 | orchestrator | 2025-05-19 19:49:30.588742 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.588748 | orchestrator | Monday 19 May 2025 19:40:56 +0000 (0:00:00.915) 0:05:41.539 ************ 2025-05-19 19:49:30.588754 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588760 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.588797 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.588804 | orchestrator | 2025-05-19 19:49:30.588810 | orchestrator | TASK [ceph-mon : set_fact container_exec_cmd] ********************************** 2025-05-19 19:49:30.588817 | orchestrator | Monday 19 May 2025 19:40:57 +0000 (0:00:00.564) 0:05:42.104 ************ 2025-05-19 19:49:30.588823 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.588829 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.588835 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.588841 | orchestrator | 2025-05-19 19:49:30.588847 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-19 19:49:30.588853 | orchestrator | Monday 19 May 2025 19:40:57 +0000 (0:00:00.715) 0:05:42.819 ************ 2025-05-19 19:49:30.588859 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.588866 | orchestrator | 2025-05-19 
19:49:30.588872 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-19 19:49:30.588883 | orchestrator | Monday 19 May 2025 19:40:58 +0000 (0:00:00.700) 0:05:43.520 ************ 2025-05-19 19:49:30.588892 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.588902 | orchestrator | 2025-05-19 19:49:30.588911 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-19 19:49:30.588920 | orchestrator | Monday 19 May 2025 19:40:58 +0000 (0:00:00.164) 0:05:43.684 ************ 2025-05-19 19:49:30.588928 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 19:49:30.588937 | orchestrator | 2025-05-19 19:49:30.588947 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-19 19:49:30.588956 | orchestrator | Monday 19 May 2025 19:40:59 +0000 (0:00:00.827) 0:05:44.512 ************ 2025-05-19 19:49:30.588974 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.588983 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.588993 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589002 | orchestrator | 2025-05-19 19:49:30.589011 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-19 19:49:30.589020 | orchestrator | Monday 19 May 2025 19:41:00 +0000 (0:00:00.743) 0:05:45.256 ************ 2025-05-19 19:49:30.589029 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589038 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.589046 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589056 | orchestrator | 2025-05-19 19:49:30.589065 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-19 19:49:30.589073 | orchestrator | Monday 19 May 2025 19:41:00 +0000 (0:00:00.473) 0:05:45.729 ************ 2025-05-19 19:49:30.589082 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589090 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589098 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589105 | orchestrator | 2025-05-19 19:49:30.589118 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-19 19:49:30.589130 | orchestrator | Monday 19 May 2025 19:41:02 +0000 (0:00:01.231) 0:05:46.961 ************ 2025-05-19 19:49:30.589139 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589147 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589156 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589164 | orchestrator | 2025-05-19 19:49:30.589172 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-19 19:49:30.589180 | orchestrator | Monday 19 May 2025 19:41:03 +0000 (0:00:01.124) 0:05:48.086 ************ 2025-05-19 19:49:30.589188 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589197 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589206 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589215 | orchestrator | 2025-05-19 19:49:30.589225 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-19 19:49:30.589234 | orchestrator | Monday 19 May 2025 19:41:03 +0000 (0:00:00.756) 0:05:48.843 ************ 2025-05-19 19:49:30.589244 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589254 | orchestrator | ok: [testbed-node-1] 2025-05-19 
19:49:30.589259 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589265 | orchestrator | 2025-05-19 19:49:30.589270 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-19 19:49:30.589276 | orchestrator | Monday 19 May 2025 19:41:04 +0000 (0:00:00.706) 0:05:49.549 ************ 2025-05-19 19:49:30.589281 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589287 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589292 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589297 | orchestrator | 2025-05-19 19:49:30.589303 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-19 19:49:30.589308 | orchestrator | Monday 19 May 2025 19:41:04 +0000 (0:00:00.342) 0:05:49.892 ************ 2025-05-19 19:49:30.589313 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589344 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.589349 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589361 | orchestrator | 2025-05-19 19:49:30.589366 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-19 19:49:30.589372 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.651) 0:05:50.543 ************ 2025-05-19 19:49:30.589377 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589382 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589387 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589393 | orchestrator | 2025-05-19 19:49:30.589398 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-19 19:49:30.589403 | orchestrator | Monday 19 May 2025 19:41:05 +0000 (0:00:00.402) 0:05:50.946 ************ 2025-05-19 19:49:30.589409 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589423 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.589428 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589434 | orchestrator | 2025-05-19 19:49:30.589439 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-19 19:49:30.589445 | orchestrator | Monday 19 May 2025 19:41:06 +0000 (0:00:00.434) 0:05:51.380 ************ 2025-05-19 19:49:30.589450 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589455 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589461 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589466 | orchestrator | 2025-05-19 19:49:30.589471 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-19 19:49:30.589512 | orchestrator | Monday 19 May 2025 19:41:07 +0000 (0:00:01.266) 0:05:52.647 ************ 2025-05-19 19:49:30.589518 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589523 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589529 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589534 | orchestrator | 2025-05-19 19:49:30.589539 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-19 19:49:30.589545 | orchestrator | Monday 19 May 2025 19:41:08 +0000 (0:00:00.633) 0:05:53.280 ************ 2025-05-19 19:49:30.589550 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.589556 | orchestrator | 2025-05-19 19:49:30.589561 | orchestrator | TASK [ceph-mon : 
ensure systemd service override directory exists] ************* 2025-05-19 19:49:30.589566 | orchestrator | Monday 19 May 2025 19:41:08 +0000 (0:00:00.551) 0:05:53.832 ************ 2025-05-19 19:49:30.589571 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589577 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589582 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589587 | orchestrator | 2025-05-19 19:49:30.589593 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-19 19:49:30.589598 | orchestrator | Monday 19 May 2025 19:41:09 +0000 (0:00:00.334) 0:05:54.167 ************ 2025-05-19 19:49:30.589603 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589609 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589614 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589619 | orchestrator | 2025-05-19 19:49:30.589625 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-19 19:49:30.589630 | orchestrator | Monday 19 May 2025 19:41:09 +0000 (0:00:00.619) 0:05:54.786 ************ 2025-05-19 19:49:30.589636 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.589642 | orchestrator | 2025-05-19 19:49:30.589647 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-19 19:49:30.589653 | orchestrator | Monday 19 May 2025 19:41:10 +0000 (0:00:00.599) 0:05:55.385 ************ 2025-05-19 19:49:30.589658 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589663 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589669 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589674 | orchestrator | 2025-05-19 19:49:30.589679 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-19 19:49:30.589685 | orchestrator | Monday 19 May 2025 19:41:11 +0000 (0:00:01.440) 0:05:56.826 ************ 2025-05-19 19:49:30.589690 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589695 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589700 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589706 | orchestrator | 2025-05-19 19:49:30.589711 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-19 19:49:30.589716 | orchestrator | Monday 19 May 2025 19:41:13 +0000 (0:00:01.187) 0:05:58.014 ************ 2025-05-19 19:49:30.589722 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589727 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589732 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589737 | orchestrator | 2025-05-19 19:49:30.589747 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-19 19:49:30.589752 | orchestrator | Monday 19 May 2025 19:41:14 +0000 (0:00:01.787) 0:05:59.802 ************ 2025-05-19 19:49:30.589757 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589763 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589768 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589773 | orchestrator | 2025-05-19 19:49:30.589779 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-19 19:49:30.589784 | orchestrator | Monday 19 May 2025 19:41:17 
+0000 (0:00:02.875) 0:06:02.678 ************ 2025-05-19 19:49:30.589789 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.589795 | orchestrator | 2025-05-19 19:49:30.589800 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-19 19:49:30.589805 | orchestrator | Monday 19 May 2025 19:41:18 +0000 (0:00:00.675) 0:06:03.353 ************ 2025-05-19 19:49:30.589811 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-19 19:49:30.589816 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589822 | orchestrator | 2025-05-19 19:49:30.589827 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-19 19:49:30.589833 | orchestrator | Monday 19 May 2025 19:41:40 +0000 (0:00:21.616) 0:06:24.969 ************ 2025-05-19 19:49:30.589838 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.589847 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589853 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589858 | orchestrator | 2025-05-19 19:49:30.589863 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-19 19:49:30.589869 | orchestrator | Monday 19 May 2025 19:41:47 +0000 (0:00:07.274) 0:06:32.244 ************ 2025-05-19 19:49:30.589874 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.589879 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.589885 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.589890 | orchestrator | 2025-05-19 19:49:30.589895 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-19 19:49:30.589901 | orchestrator | Monday 19 May 2025 19:41:48 +0000 (0:00:01.386) 0:06:33.630 ************ 2025-05-19 19:49:30.589906 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.589911 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.589916 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.589922 | orchestrator | 2025-05-19 19:49:30.589927 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-19 19:49:30.589932 | orchestrator | Monday 19 May 2025 19:41:49 +0000 (0:00:00.708) 0:06:34.339 ************ 2025-05-19 19:49:30.589937 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.589943 | orchestrator | 2025-05-19 19:49:30.589949 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-19 19:49:30.589970 | orchestrator | Monday 19 May 2025 19:41:50 +0000 (0:00:00.855) 0:06:35.194 ************ 2025-05-19 19:49:30.589977 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.589982 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.589987 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.589993 | orchestrator | 2025-05-19 19:49:30.589998 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-19 19:49:30.590003 | orchestrator | Monday 19 May 2025 19:41:50 +0000 (0:00:00.399) 0:06:35.594 ************ 2025-05-19 19:49:30.590009 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.590033 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.590040 | 
orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.590046 | orchestrator | 2025-05-19 19:49:30.590051 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-19 19:49:30.590056 | orchestrator | Monday 19 May 2025 19:41:51 +0000 (0:00:01.230) 0:06:36.824 ************ 2025-05-19 19:49:30.590066 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.590072 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.590077 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.590083 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590088 | orchestrator | 2025-05-19 19:49:30.590093 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-19 19:49:30.590099 | orchestrator | Monday 19 May 2025 19:41:53 +0000 (0:00:01.269) 0:06:38.094 ************ 2025-05-19 19:49:30.590104 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.590110 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.590115 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.590120 | orchestrator | 2025-05-19 19:49:30.590126 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.590131 | orchestrator | Monday 19 May 2025 19:41:53 +0000 (0:00:00.356) 0:06:38.450 ************ 2025-05-19 19:49:30.590137 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.590142 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.590147 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.590153 | orchestrator | 2025-05-19 19:49:30.590158 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-19 19:49:30.590163 | orchestrator | 2025-05-19 19:49:30.590169 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.590174 | orchestrator | Monday 19 May 2025 19:41:55 +0000 (0:00:02.171) 0:06:40.622 ************ 2025-05-19 19:49:30.590179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.590185 | orchestrator | 2025-05-19 19:49:30.590191 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.590196 | orchestrator | Monday 19 May 2025 19:41:56 +0000 (0:00:00.836) 0:06:41.459 ************ 2025-05-19 19:49:30.590201 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.590207 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.590212 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.590217 | orchestrator | 2025-05-19 19:49:30.590223 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.590228 | orchestrator | Monday 19 May 2025 19:41:57 +0000 (0:00:00.745) 0:06:42.204 ************ 2025-05-19 19:49:30.590233 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590239 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590244 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590249 | orchestrator | 2025-05-19 19:49:30.590254 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.590260 | orchestrator | Monday 19 May 2025 19:41:57 +0000 (0:00:00.336) 0:06:42.541 ************ 
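(For reference: the "waiting for the monitor(s) to form the quorum..." step earlier in this play, which needed one retry before reporting ok, amounts to polling a monitor until all three mons are in quorum. A minimal manual equivalent is sketched below; the container name ceph-mon-testbed-node-0 and the docker runtime are assumptions based on the usual ceph-ansible naming, not taken from this log.)

    # Ask the local monitor which mons are currently in quorum; repeat until
    # all three testbed mons appear (this is what the retry loop waits for).
    docker exec ceph-mon-testbed-node-0 ceph quorum_status --format json
    # The JSON output contains "quorum_names"; the quorum has formed once it
    # lists testbed-node-0, testbed-node-1 and testbed-node-2.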
2025-05-19 19:49:30.590265 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590271 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590276 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590281 | orchestrator | 2025-05-19 19:49:30.590286 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.590292 | orchestrator | Monday 19 May 2025 19:41:58 +0000 (0:00:00.647) 0:06:43.188 ************ 2025-05-19 19:49:30.590297 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590303 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590308 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590314 | orchestrator | 2025-05-19 19:49:30.590336 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.590341 | orchestrator | Monday 19 May 2025 19:41:58 +0000 (0:00:00.351) 0:06:43.540 ************ 2025-05-19 19:49:30.590347 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.590352 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.590357 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.590363 | orchestrator | 2025-05-19 19:49:30.590368 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.590378 | orchestrator | Monday 19 May 2025 19:41:59 +0000 (0:00:00.753) 0:06:44.294 ************ 2025-05-19 19:49:30.590384 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590389 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590395 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590400 | orchestrator | 2025-05-19 19:49:30.590405 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.590411 | orchestrator | Monday 19 May 2025 19:41:59 +0000 (0:00:00.337) 0:06:44.631 ************ 2025-05-19 19:49:30.590416 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590422 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590427 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590432 | orchestrator | 2025-05-19 19:49:30.590440 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-19 19:49:30.590448 | orchestrator | Monday 19 May 2025 19:42:00 +0000 (0:00:00.697) 0:06:45.329 ************ 2025-05-19 19:49:30.590457 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590465 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590480 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590490 | orchestrator | 2025-05-19 19:49:30.590498 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.590536 | orchestrator | Monday 19 May 2025 19:42:00 +0000 (0:00:00.356) 0:06:45.686 ************ 2025-05-19 19:49:30.590545 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590552 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590560 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590569 | orchestrator | 2025-05-19 19:49:30.590577 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.590667 | orchestrator | Monday 19 May 2025 19:42:01 +0000 (0:00:00.350) 0:06:46.036 ************ 2025-05-19 19:49:30.590696 | orchestrator | skipping: [testbed-node-0] 
2025-05-19 19:49:30.590705 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590715 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590720 | orchestrator | 2025-05-19 19:49:30.590726 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.590731 | orchestrator | Monday 19 May 2025 19:42:01 +0000 (0:00:00.343) 0:06:46.379 ************ 2025-05-19 19:49:30.590737 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.590742 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.590748 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.590753 | orchestrator | 2025-05-19 19:49:30.590758 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.590764 | orchestrator | Monday 19 May 2025 19:42:02 +0000 (0:00:01.071) 0:06:47.450 ************ 2025-05-19 19:49:30.590769 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590774 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590780 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590785 | orchestrator | 2025-05-19 19:49:30.590791 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.590796 | orchestrator | Monday 19 May 2025 19:42:02 +0000 (0:00:00.332) 0:06:47.782 ************ 2025-05-19 19:49:30.590801 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.590807 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.590812 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.590817 | orchestrator | 2025-05-19 19:49:30.590822 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.590828 | orchestrator | Monday 19 May 2025 19:42:03 +0000 (0:00:00.344) 0:06:48.127 ************ 2025-05-19 19:49:30.590833 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590838 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590843 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590849 | orchestrator | 2025-05-19 19:49:30.590854 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.590865 | orchestrator | Monday 19 May 2025 19:42:03 +0000 (0:00:00.328) 0:06:48.456 ************ 2025-05-19 19:49:30.590870 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590875 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590881 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590886 | orchestrator | 2025-05-19 19:49:30.590891 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.590897 | orchestrator | Monday 19 May 2025 19:42:04 +0000 (0:00:00.623) 0:06:49.079 ************ 2025-05-19 19:49:30.590902 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590907 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590912 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590918 | orchestrator | 2025-05-19 19:49:30.590923 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.590928 | orchestrator | Monday 19 May 2025 19:42:04 +0000 (0:00:00.348) 0:06:49.428 ************ 2025-05-19 19:49:30.590933 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590939 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590944 
| orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590949 | orchestrator | 2025-05-19 19:49:30.590954 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.590960 | orchestrator | Monday 19 May 2025 19:42:04 +0000 (0:00:00.338) 0:06:49.766 ************ 2025-05-19 19:49:30.590965 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.590971 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.590976 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.590981 | orchestrator | 2025-05-19 19:49:30.590987 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.590992 | orchestrator | Monday 19 May 2025 19:42:05 +0000 (0:00:00.336) 0:06:50.103 ************ 2025-05-19 19:49:30.590997 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.591002 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.591008 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.591013 | orchestrator | 2025-05-19 19:49:30.591018 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.591024 | orchestrator | Monday 19 May 2025 19:42:05 +0000 (0:00:00.687) 0:06:50.790 ************ 2025-05-19 19:49:30.591029 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.591034 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.591040 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.591045 | orchestrator | 2025-05-19 19:49:30.591053 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.591059 | orchestrator | Monday 19 May 2025 19:42:06 +0000 (0:00:00.421) 0:06:51.211 ************ 2025-05-19 19:49:30.591064 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591070 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591075 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591080 | orchestrator | 2025-05-19 19:49:30.591086 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.591091 | orchestrator | Monday 19 May 2025 19:42:06 +0000 (0:00:00.418) 0:06:51.629 ************ 2025-05-19 19:49:30.591096 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591102 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591107 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591112 | orchestrator | 2025-05-19 19:49:30.591117 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.591123 | orchestrator | Monday 19 May 2025 19:42:07 +0000 (0:00:00.373) 0:06:52.002 ************ 2025-05-19 19:49:30.591128 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591133 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591139 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591144 | orchestrator | 2025-05-19 19:49:30.591149 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.591155 | orchestrator | Monday 19 May 2025 19:42:07 +0000 (0:00:00.727) 0:06:52.730 ************ 2025-05-19 19:49:30.591189 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591195 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591201 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591206 | orchestrator | 
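(For reference: the ceph-config tasks around this point, skipped here because these are mon/mgr nodes, derive the num_osds fact from ceph-volume's reporting. On the OSD nodes that do run them, the underlying calls look roughly like the sketch below; the device paths are placeholders, not taken from this log.)

    # Report, without applying anything, how many OSDs 'lvm batch' would
    # create on the given devices; the role parses the JSON to set num_osds.
    ceph-volume lvm batch --report --format json /dev/sdb /dev/sdc
    # Count OSDs that already exist on the node, which the role adds on top.
    ceph-volume lvm list --format json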
2025-05-19 19:49:30.591211 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.591217 | orchestrator | Monday 19 May 2025 19:42:08 +0000 (0:00:00.418) 0:06:53.148 ************ 2025-05-19 19:49:30.591222 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591229 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591239 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591248 | orchestrator | 2025-05-19 19:49:30.591256 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.591265 | orchestrator | Monday 19 May 2025 19:42:08 +0000 (0:00:00.358) 0:06:53.506 ************ 2025-05-19 19:49:30.591276 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591285 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591294 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591304 | orchestrator | 2025-05-19 19:49:30.591310 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.591332 | orchestrator | Monday 19 May 2025 19:42:08 +0000 (0:00:00.342) 0:06:53.849 ************ 2025-05-19 19:49:30.591338 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591344 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591349 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591354 | orchestrator | 2025-05-19 19:49:30.591360 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.591365 | orchestrator | Monday 19 May 2025 19:42:09 +0000 (0:00:00.726) 0:06:54.576 ************ 2025-05-19 19:49:30.591371 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591376 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591381 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591386 | orchestrator | 2025-05-19 19:49:30.591392 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.591397 | orchestrator | Monday 19 May 2025 19:42:10 +0000 (0:00:00.394) 0:06:54.971 ************ 2025-05-19 19:49:30.591402 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591408 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591413 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591418 | orchestrator | 2025-05-19 19:49:30.591423 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.591429 | orchestrator | Monday 19 May 2025 19:42:10 +0000 (0:00:00.371) 0:06:55.343 ************ 2025-05-19 19:49:30.591434 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591439 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591445 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591450 | orchestrator | 2025-05-19 19:49:30.591455 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.591460 | orchestrator | Monday 19 May 2025 19:42:10 +0000 (0:00:00.357) 0:06:55.700 ************ 2025-05-19 19:49:30.591466 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591471 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591476 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
19:49:30.591481 | orchestrator | 2025-05-19 19:49:30.591487 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.591492 | orchestrator | Monday 19 May 2025 19:42:11 +0000 (0:00:00.735) 0:06:56.435 ************ 2025-05-19 19:49:30.591497 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591502 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591513 | orchestrator | 2025-05-19 19:49:30.591518 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.591530 | orchestrator | Monday 19 May 2025 19:42:11 +0000 (0:00:00.396) 0:06:56.832 ************ 2025-05-19 19:49:30.591535 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.591541 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.591546 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591551 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.591557 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.591562 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591567 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.591573 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.591578 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591583 | orchestrator | 2025-05-19 19:49:30.591588 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.591597 | orchestrator | Monday 19 May 2025 19:42:12 +0000 (0:00:00.414) 0:06:57.246 ************ 2025-05-19 19:49:30.591603 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-19 19:49:30.591608 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-19 19:49:30.591614 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-19 19:49:30.591619 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591624 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-19 19:49:30.591630 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591635 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-19 19:49:30.591640 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-19 19:49:30.591645 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591651 | orchestrator | 2025-05-19 19:49:30.591656 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.591662 | orchestrator | Monday 19 May 2025 19:42:12 +0000 (0:00:00.393) 0:06:57.640 ************ 2025-05-19 19:49:30.591667 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591672 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591677 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591683 | orchestrator | 2025-05-19 19:49:30.591688 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.591715 | orchestrator | Monday 19 May 2025 19:42:13 +0000 (0:00:00.710) 0:06:58.351 ************ 2025-05-19 19:49:30.591721 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591726 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591732 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591737 | orchestrator | 2025-05-19 19:49:30.591742 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.591748 | orchestrator | Monday 19 May 2025 19:42:13 +0000 (0:00:00.351) 0:06:58.702 ************ 2025-05-19 19:49:30.591753 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591759 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591764 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591769 | orchestrator | 2025-05-19 19:49:30.591774 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.591780 | orchestrator | Monday 19 May 2025 19:42:14 +0000 (0:00:00.354) 0:06:59.057 ************ 2025-05-19 19:49:30.591785 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591790 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591795 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591801 | orchestrator | 2025-05-19 19:49:30.591806 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.591811 | orchestrator | Monday 19 May 2025 19:42:14 +0000 (0:00:00.398) 0:06:59.456 ************ 2025-05-19 19:49:30.591817 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591822 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591827 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591837 | orchestrator | 2025-05-19 19:49:30.591842 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.591848 | orchestrator | Monday 19 May 2025 19:42:15 +0000 (0:00:00.720) 0:07:00.176 ************ 2025-05-19 19:49:30.591853 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591858 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.591864 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.591869 | orchestrator | 2025-05-19 19:49:30.591874 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.591880 | orchestrator | Monday 19 May 2025 19:42:15 +0000 (0:00:00.428) 0:07:00.605 ************ 2025-05-19 19:49:30.591885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.591890 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.591896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.591904 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591916 | orchestrator | 2025-05-19 19:49:30.591928 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.591936 | orchestrator | Monday 19 May 2025 19:42:16 +0000 (0:00:00.503) 0:07:01.109 ************ 2025-05-19 19:49:30.591944 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.591953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.591961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.591969 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.591977 | orchestrator | 2025-05-19 19:49:30.591987 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] 
****** 2025-05-19 19:49:30.591996 | orchestrator | Monday 19 May 2025 19:42:16 +0000 (0:00:00.502) 0:07:01.612 ************ 2025-05-19 19:49:30.592005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.592014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.592024 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.592030 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592035 | orchestrator | 2025-05-19 19:49:30.592040 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.592046 | orchestrator | Monday 19 May 2025 19:42:17 +0000 (0:00:00.469) 0:07:02.082 ************ 2025-05-19 19:49:30.592051 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592056 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592062 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592067 | orchestrator | 2025-05-19 19:49:30.592072 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.592077 | orchestrator | Monday 19 May 2025 19:42:17 +0000 (0:00:00.374) 0:07:02.457 ************ 2025-05-19 19:49:30.592083 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.592088 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592093 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.592098 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592107 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.592113 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592118 | orchestrator | 2025-05-19 19:49:30.592123 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.592129 | orchestrator | Monday 19 May 2025 19:42:18 +0000 (0:00:00.918) 0:07:03.375 ************ 2025-05-19 19:49:30.592134 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592139 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592144 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592150 | orchestrator | 2025-05-19 19:49:30.592155 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.592160 | orchestrator | Monday 19 May 2025 19:42:18 +0000 (0:00:00.395) 0:07:03.770 ************ 2025-05-19 19:49:30.592171 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592176 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592181 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592186 | orchestrator | 2025-05-19 19:49:30.592192 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.592197 | orchestrator | Monday 19 May 2025 19:42:19 +0000 (0:00:00.351) 0:07:04.122 ************ 2025-05-19 19:49:30.592202 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.592207 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592213 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.592218 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592257 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.592270 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592278 | orchestrator | 2025-05-19 
19:49:30.592287 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.592295 | orchestrator | Monday 19 May 2025 19:42:20 +0000 (0:00:01.182) 0:07:05.305 ************ 2025-05-19 19:49:30.592303 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592311 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592379 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592397 | orchestrator | 2025-05-19 19:49:30.592407 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.592413 | orchestrator | Monday 19 May 2025 19:42:20 +0000 (0:00:00.374) 0:07:05.680 ************ 2025-05-19 19:49:30.592418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.592423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.592429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.592434 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:49:30.592444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:49:30.592450 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:49:30.592455 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592460 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:49:30.592465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:49:30.592471 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:49:30.592476 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592481 | orchestrator | 2025-05-19 19:49:30.592486 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.592492 | orchestrator | Monday 19 May 2025 19:42:21 +0000 (0:00:00.669) 0:07:06.349 ************ 2025-05-19 19:49:30.592497 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592502 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592513 | orchestrator | 2025-05-19 19:49:30.592518 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.592524 | orchestrator | Monday 19 May 2025 19:42:22 +0000 (0:00:00.920) 0:07:07.270 ************ 2025-05-19 19:49:30.592529 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592534 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592540 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592545 | orchestrator | 2025-05-19 19:49:30.592550 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.592555 | orchestrator | Monday 19 May 2025 19:42:22 +0000 (0:00:00.621) 0:07:07.892 ************ 2025-05-19 19:49:30.592561 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592566 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592571 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592576 | orchestrator | 2025-05-19 19:49:30.592582 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.592593 | orchestrator | Monday 19 May 2025 19:42:23 
+0000 (0:00:00.942) 0:07:08.834 ************ 2025-05-19 19:49:30.592598 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592604 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592609 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592614 | orchestrator | 2025-05-19 19:49:30.592619 | orchestrator | TASK [ceph-mgr : set_fact container_exec_cmd] ********************************** 2025-05-19 19:49:30.592625 | orchestrator | Monday 19 May 2025 19:42:24 +0000 (0:00:00.703) 0:07:09.538 ************ 2025-05-19 19:49:30.592630 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:49:30.592636 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.592641 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.592647 | orchestrator | 2025-05-19 19:49:30.592652 | orchestrator | TASK [ceph-mgr : include common.yml] ******************************************* 2025-05-19 19:49:30.592657 | orchestrator | Monday 19 May 2025 19:42:25 +0000 (0:00:01.017) 0:07:10.555 ************ 2025-05-19 19:49:30.592663 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.592668 | orchestrator | 2025-05-19 19:49:30.592674 | orchestrator | TASK [ceph-mgr : create mgr directory] ***************************************** 2025-05-19 19:49:30.592683 | orchestrator | Monday 19 May 2025 19:42:26 +0000 (0:00:00.915) 0:07:11.471 ************ 2025-05-19 19:49:30.592689 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.592694 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.592699 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.592704 | orchestrator | 2025-05-19 19:49:30.592710 | orchestrator | TASK [ceph-mgr : fetch ceph mgr keyring] *************************************** 2025-05-19 19:49:30.592716 | orchestrator | Monday 19 May 2025 19:42:27 +0000 (0:00:00.683) 0:07:12.155 ************ 2025-05-19 19:49:30.592721 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592726 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592732 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592737 | orchestrator | 2025-05-19 19:49:30.592742 | orchestrator | TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] ********************* 2025-05-19 19:49:30.592747 | orchestrator | Monday 19 May 2025 19:42:27 +0000 (0:00:00.624) 0:07:12.779 ************ 2025-05-19 19:49:30.592753 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:49:30.592758 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:49:30.592763 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:49:30.592769 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-19 19:49:30.592774 | orchestrator | 2025-05-19 19:49:30.592779 | orchestrator | TASK [ceph-mgr : set_fact _mgr_keys] ******************************************* 2025-05-19 19:49:30.592785 | orchestrator | Monday 19 May 2025 19:42:36 +0000 (0:00:08.442) 0:07:21.222 ************ 2025-05-19 19:49:30.592816 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.592823 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.592828 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.592834 | orchestrator | 2025-05-19 19:49:30.592839 | orchestrator | TASK [ceph-mgr : get keys from 
monitors] *************************************** 2025-05-19 19:49:30.592844 | orchestrator | Monday 19 May 2025 19:42:36 +0000 (0:00:00.402) 0:07:21.625 ************ 2025-05-19 19:49:30.592850 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 19:49:30.592855 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 19:49:30.592860 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 19:49:30.592866 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:49:30.592871 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 19:49:30.592877 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:49:30.592882 | orchestrator | 2025-05-19 19:49:30.592887 | orchestrator | TASK [ceph-mgr : copy ceph key(s) if needed] *********************************** 2025-05-19 19:49:30.592896 | orchestrator | Monday 19 May 2025 19:42:39 +0000 (0:00:02.394) 0:07:24.019 ************ 2025-05-19 19:49:30.592901 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 19:49:30.592906 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 19:49:30.592911 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 19:49:30.592915 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:49:30.592920 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-19 19:49:30.592925 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-19 19:49:30.592929 | orchestrator | 2025-05-19 19:49:30.592934 | orchestrator | TASK [ceph-mgr : set mgr key permissions] ************************************** 2025-05-19 19:49:30.592939 | orchestrator | Monday 19 May 2025 19:42:40 +0000 (0:00:01.272) 0:07:25.292 ************ 2025-05-19 19:49:30.592944 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.592948 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.592953 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.592958 | orchestrator | 2025-05-19 19:49:30.592962 | orchestrator | TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] ***************** 2025-05-19 19:49:30.592967 | orchestrator | Monday 19 May 2025 19:42:41 +0000 (0:00:00.682) 0:07:25.975 ************ 2025-05-19 19:49:30.592972 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.592977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.592981 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.592986 | orchestrator | 2025-05-19 19:49:30.592991 | orchestrator | TASK [ceph-mgr : include pre_requisite.yml] ************************************ 2025-05-19 19:49:30.592995 | orchestrator | Monday 19 May 2025 19:42:41 +0000 (0:00:00.589) 0:07:26.564 ************ 2025-05-19 19:49:30.593000 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.593005 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.593010 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.593014 | orchestrator | 2025-05-19 19:49:30.593019 | orchestrator | TASK [ceph-mgr : include start_mgr.yml] **************************************** 2025-05-19 19:49:30.593024 | orchestrator | Monday 19 May 2025 19:42:41 +0000 (0:00:00.361) 0:07:26.926 ************ 2025-05-19 19:49:30.593029 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.593033 | orchestrator | 2025-05-19 19:49:30.593038 | orchestrator | TASK [ceph-mgr : 
ensure systemd service override directory exists] ************* 2025-05-19 19:49:30.593043 | orchestrator | Monday 19 May 2025 19:42:42 +0000 (0:00:00.602) 0:07:27.528 ************ 2025-05-19 19:49:30.593048 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.593052 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.593057 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.593062 | orchestrator | 2025-05-19 19:49:30.593066 | orchestrator | TASK [ceph-mgr : add ceph-mgr systemd service overrides] *********************** 2025-05-19 19:49:30.593071 | orchestrator | Monday 19 May 2025 19:42:43 +0000 (0:00:00.657) 0:07:28.185 ************ 2025-05-19 19:49:30.593076 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.593081 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.593085 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.593090 | orchestrator | 2025-05-19 19:49:30.593095 | orchestrator | TASK [ceph-mgr : include_tasks systemd.yml] ************************************ 2025-05-19 19:49:30.593100 | orchestrator | Monday 19 May 2025 19:42:43 +0000 (0:00:00.378) 0:07:28.564 ************ 2025-05-19 19:49:30.593104 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.593109 | orchestrator | 2025-05-19 19:49:30.593114 | orchestrator | TASK [ceph-mgr : generate systemd unit file] *********************************** 2025-05-19 19:49:30.593122 | orchestrator | Monday 19 May 2025 19:42:44 +0000 (0:00:00.593) 0:07:29.157 ************ 2025-05-19 19:49:30.593127 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593131 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593136 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593144 | orchestrator | 2025-05-19 19:49:30.593149 | orchestrator | TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************ 2025-05-19 19:49:30.593154 | orchestrator | Monday 19 May 2025 19:42:45 +0000 (0:00:01.562) 0:07:30.720 ************ 2025-05-19 19:49:30.593159 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593163 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593168 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593173 | orchestrator | 2025-05-19 19:49:30.593177 | orchestrator | TASK [ceph-mgr : enable ceph-mgr.target] *************************************** 2025-05-19 19:49:30.593182 | orchestrator | Monday 19 May 2025 19:42:47 +0000 (0:00:01.299) 0:07:32.020 ************ 2025-05-19 19:49:30.593187 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593192 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593196 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593201 | orchestrator | 2025-05-19 19:49:30.593206 | orchestrator | TASK [ceph-mgr : systemd start mgr] ******************************************** 2025-05-19 19:49:30.593210 | orchestrator | Monday 19 May 2025 19:42:48 +0000 (0:00:01.747) 0:07:33.767 ************ 2025-05-19 19:49:30.593215 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593220 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593248 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593257 | orchestrator | 2025-05-19 19:49:30.593265 | orchestrator | TASK [ceph-mgr : include mgr_modules.yml] ************************************** 2025-05-19 19:49:30.593273 | orchestrator | Monday 19 May 2025 19:42:51 
+0000 (0:00:02.876) 0:07:36.644 ************ 2025-05-19 19:49:30.593282 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.593289 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.593297 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-19 19:49:30.593305 | orchestrator | 2025-05-19 19:49:30.593313 | orchestrator | TASK [ceph-mgr : wait for all mgr to be up] ************************************ 2025-05-19 19:49:30.593334 | orchestrator | Monday 19 May 2025 19:42:52 +0000 (0:00:00.592) 0:07:37.237 ************ 2025-05-19 19:49:30.593339 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left). 2025-05-19 19:49:30.593344 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left). 2025-05-19 19:49:30.593349 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.593353 | orchestrator | 2025-05-19 19:49:30.593358 | orchestrator | TASK [ceph-mgr : get enabled modules from ceph-mgr] **************************** 2025-05-19 19:49:30.593363 | orchestrator | Monday 19 May 2025 19:43:05 +0000 (0:00:13.438) 0:07:50.675 ************ 2025-05-19 19:49:30.593368 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.593372 | orchestrator | 2025-05-19 19:49:30.593377 | orchestrator | TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-19 19:49:30.593382 | orchestrator | Monday 19 May 2025 19:43:07 +0000 (0:00:01.786) 0:07:52.462 ************ 2025-05-19 19:49:30.593387 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.593391 | orchestrator | 2025-05-19 19:49:30.593396 | orchestrator | TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] ************************** 2025-05-19 19:49:30.593401 | orchestrator | Monday 19 May 2025 19:43:07 +0000 (0:00:00.440) 0:07:52.902 ************ 2025-05-19 19:49:30.593406 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.593410 | orchestrator | 2025-05-19 19:49:30.593415 | orchestrator | TASK [ceph-mgr : disable ceph mgr enabled modules] ***************************** 2025-05-19 19:49:30.593420 | orchestrator | Monday 19 May 2025 19:43:08 +0000 (0:00:00.338) 0:07:53.241 ************ 2025-05-19 19:49:30.593425 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-19 19:49:30.593429 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-19 19:49:30.593434 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-19 19:49:30.593449 | orchestrator | 2025-05-19 19:49:30.593453 | orchestrator | TASK [ceph-mgr : add modules to ceph-mgr] ************************************** 2025-05-19 19:49:30.593458 | orchestrator | Monday 19 May 2025 19:43:15 +0000 (0:00:06.731) 0:07:59.973 ************ 2025-05-19 19:49:30.593463 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-19 19:49:30.593468 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-19 19:49:30.593472 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-19 19:49:30.593477 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-19 19:49:30.593482 | orchestrator | 2025-05-19 19:49:30.593486 | orchestrator | RUNNING HANDLER [ceph-handler : 
make tempdir for scripts] ********************** 2025-05-19 19:49:30.593491 | orchestrator | Monday 19 May 2025 19:43:19 +0000 (0:00:04.918) 0:08:04.891 ************ 2025-05-19 19:49:30.593496 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593501 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593505 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593510 | orchestrator | 2025-05-19 19:49:30.593515 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-19 19:49:30.593519 | orchestrator | Monday 19 May 2025 19:43:20 +0000 (0:00:00.689) 0:08:05.581 ************ 2025-05-19 19:49:30.593524 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:49:30.593529 | orchestrator | 2025-05-19 19:49:30.593534 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-19 19:49:30.593538 | orchestrator | Monday 19 May 2025 19:43:21 +0000 (0:00:00.880) 0:08:06.462 ************ 2025-05-19 19:49:30.593543 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.593548 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.593556 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.593561 | orchestrator | 2025-05-19 19:49:30.593565 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-19 19:49:30.593570 | orchestrator | Monday 19 May 2025 19:43:21 +0000 (0:00:00.372) 0:08:06.834 ************ 2025-05-19 19:49:30.593575 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593580 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593584 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593589 | orchestrator | 2025-05-19 19:49:30.593594 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-19 19:49:30.593598 | orchestrator | Monday 19 May 2025 19:43:23 +0000 (0:00:01.502) 0:08:08.337 ************ 2025-05-19 19:49:30.593603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:49:30.593608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:49:30.593612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:49:30.593617 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.593622 | orchestrator | 2025-05-19 19:49:30.593627 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] ********* 2025-05-19 19:49:30.593631 | orchestrator | Monday 19 May 2025 19:43:24 +0000 (0:00:00.688) 0:08:09.026 ************ 2025-05-19 19:49:30.593636 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.593641 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.593645 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.593650 | orchestrator | 2025-05-19 19:49:30.593674 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.593679 | orchestrator | Monday 19 May 2025 19:43:24 +0000 (0:00:00.431) 0:08:09.457 ************ 2025-05-19 19:49:30.593684 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.593689 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.593694 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.593699 | orchestrator | 2025-05-19 19:49:30.593703 | orchestrator | PLAY [Apply role ceph-osd] 
***************************************************** 2025-05-19 19:49:30.593708 | orchestrator | 2025-05-19 19:49:30.593713 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.593723 | orchestrator | Monday 19 May 2025 19:43:26 +0000 (0:00:02.139) 0:08:11.596 ************ 2025-05-19 19:49:30.593728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.593733 | orchestrator | 2025-05-19 19:49:30.593737 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.593742 | orchestrator | Monday 19 May 2025 19:43:27 +0000 (0:00:00.810) 0:08:12.407 ************ 2025-05-19 19:49:30.593747 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.593752 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.593756 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.593761 | orchestrator | 2025-05-19 19:49:30.593766 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.593770 | orchestrator | Monday 19 May 2025 19:43:27 +0000 (0:00:00.324) 0:08:12.731 ************ 2025-05-19 19:49:30.593775 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.593780 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.593787 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.593795 | orchestrator | 2025-05-19 19:49:30.593801 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.593808 | orchestrator | Monday 19 May 2025 19:43:28 +0000 (0:00:00.756) 0:08:13.487 ************ 2025-05-19 19:49:30.593816 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.593823 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.593831 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.593839 | orchestrator | 2025-05-19 19:49:30.593846 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.593854 | orchestrator | Monday 19 May 2025 19:43:29 +0000 (0:00:01.179) 0:08:14.666 ************ 2025-05-19 19:49:30.593861 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.593868 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.593876 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.593883 | orchestrator | 2025-05-19 19:49:30.593891 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.593899 | orchestrator | Monday 19 May 2025 19:43:30 +0000 (0:00:00.773) 0:08:15.440 ************ 2025-05-19 19:49:30.593907 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.593915 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.593924 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.593933 | orchestrator | 2025-05-19 19:49:30.593941 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.593948 | orchestrator | Monday 19 May 2025 19:43:30 +0000 (0:00:00.364) 0:08:15.804 ************ 2025-05-19 19:49:30.593953 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.593958 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.593962 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.593967 | orchestrator | 2025-05-19 19:49:30.593972 | orchestrator | TASK [ceph-handler 
: check for a nfs container] ******************************** 2025-05-19 19:49:30.593976 | orchestrator | Monday 19 May 2025 19:43:31 +0000 (0:00:00.710) 0:08:16.515 ************ 2025-05-19 19:49:30.593981 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.593986 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.593990 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.593995 | orchestrator | 2025-05-19 19:49:30.594000 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-19 19:49:30.594005 | orchestrator | Monday 19 May 2025 19:43:31 +0000 (0:00:00.359) 0:08:16.875 ************ 2025-05-19 19:49:30.594009 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594046 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594052 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594057 | orchestrator | 2025-05-19 19:49:30.594062 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.594067 | orchestrator | Monday 19 May 2025 19:43:32 +0000 (0:00:00.375) 0:08:17.250 ************ 2025-05-19 19:49:30.594077 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594082 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594089 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594097 | orchestrator | 2025-05-19 19:49:30.594109 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.594118 | orchestrator | Monday 19 May 2025 19:43:32 +0000 (0:00:00.362) 0:08:17.613 ************ 2025-05-19 19:49:30.594126 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594135 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594140 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594145 | orchestrator | 2025-05-19 19:49:30.594150 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.594154 | orchestrator | Monday 19 May 2025 19:43:33 +0000 (0:00:00.662) 0:08:18.276 ************ 2025-05-19 19:49:30.594159 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.594164 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.594169 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.594174 | orchestrator | 2025-05-19 19:49:30.594178 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.594183 | orchestrator | Monday 19 May 2025 19:43:34 +0000 (0:00:00.823) 0:08:19.100 ************ 2025-05-19 19:49:30.594188 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594192 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594197 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594204 | orchestrator | 2025-05-19 19:49:30.594211 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.594218 | orchestrator | Monday 19 May 2025 19:43:34 +0000 (0:00:00.324) 0:08:19.424 ************ 2025-05-19 19:49:30.594227 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594265 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594274 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594283 | orchestrator | 2025-05-19 19:49:30.594291 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 
19:49:30.594299 | orchestrator | Monday 19 May 2025 19:43:34 +0000 (0:00:00.327) 0:08:19.752 ************ 2025-05-19 19:49:30.594305 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.594310 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.594331 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.594336 | orchestrator | 2025-05-19 19:49:30.594341 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.594346 | orchestrator | Monday 19 May 2025 19:43:35 +0000 (0:00:00.680) 0:08:20.433 ************ 2025-05-19 19:49:30.594350 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.594355 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.594360 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.594365 | orchestrator | 2025-05-19 19:49:30.594369 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.594374 | orchestrator | Monday 19 May 2025 19:43:35 +0000 (0:00:00.399) 0:08:20.832 ************ 2025-05-19 19:49:30.594379 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.594384 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.594388 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.594393 | orchestrator | 2025-05-19 19:49:30.594398 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.594403 | orchestrator | Monday 19 May 2025 19:43:36 +0000 (0:00:00.357) 0:08:21.190 ************ 2025-05-19 19:49:30.594407 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594412 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594417 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594421 | orchestrator | 2025-05-19 19:49:30.594426 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.594431 | orchestrator | Monday 19 May 2025 19:43:36 +0000 (0:00:00.319) 0:08:21.509 ************ 2025-05-19 19:49:30.594436 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594446 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594451 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594455 | orchestrator | 2025-05-19 19:49:30.594460 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.594465 | orchestrator | Monday 19 May 2025 19:43:37 +0000 (0:00:00.620) 0:08:22.130 ************ 2025-05-19 19:49:30.594470 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594474 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594479 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594484 | orchestrator | 2025-05-19 19:49:30.594488 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.594493 | orchestrator | Monday 19 May 2025 19:43:37 +0000 (0:00:00.384) 0:08:22.514 ************ 2025-05-19 19:49:30.594498 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.594503 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.594507 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.594512 | orchestrator | 2025-05-19 19:49:30.594517 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.594521 | orchestrator | Monday 19 May 2025 19:43:38 +0000 (0:00:00.545) 0:08:23.060 ************ 2025-05-19 19:49:30.594526 
| orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594531 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594535 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594540 | orchestrator | 2025-05-19 19:49:30.594545 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.594549 | orchestrator | Monday 19 May 2025 19:43:38 +0000 (0:00:00.520) 0:08:23.580 ************ 2025-05-19 19:49:30.594554 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594559 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594564 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594568 | orchestrator | 2025-05-19 19:49:30.594573 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.594578 | orchestrator | Monday 19 May 2025 19:43:39 +0000 (0:00:00.734) 0:08:24.314 ************ 2025-05-19 19:49:30.594582 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594587 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594592 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594596 | orchestrator | 2025-05-19 19:49:30.594601 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.594606 | orchestrator | Monday 19 May 2025 19:43:39 +0000 (0:00:00.462) 0:08:24.777 ************ 2025-05-19 19:49:30.594611 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594615 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594620 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594625 | orchestrator | 2025-05-19 19:49:30.594629 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.594638 | orchestrator | Monday 19 May 2025 19:43:40 +0000 (0:00:00.411) 0:08:25.189 ************ 2025-05-19 19:49:30.594642 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594647 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594652 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594657 | orchestrator | 2025-05-19 19:49:30.594661 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.594666 | orchestrator | Monday 19 May 2025 19:43:40 +0000 (0:00:00.406) 0:08:25.596 ************ 2025-05-19 19:49:30.594671 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594675 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594680 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594685 | orchestrator | 2025-05-19 19:49:30.594689 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.594694 | orchestrator | Monday 19 May 2025 19:43:41 +0000 (0:00:00.559) 0:08:26.155 ************ 2025-05-19 19:49:30.594699 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594713 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594718 | orchestrator | 2025-05-19 19:49:30.594723 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.594728 | orchestrator | Monday 19 May 2025 19:43:41 +0000 (0:00:00.281) 0:08:26.437 ************ 2025-05-19 19:49:30.594733 | orchestrator | skipping: 
[testbed-node-3] 2025-05-19 19:49:30.594756 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594762 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594767 | orchestrator | 2025-05-19 19:49:30.594771 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.594776 | orchestrator | Monday 19 May 2025 19:43:41 +0000 (0:00:00.316) 0:08:26.753 ************ 2025-05-19 19:49:30.594781 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594786 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594790 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594795 | orchestrator | 2025-05-19 19:49:30.594800 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.594805 | orchestrator | Monday 19 May 2025 19:43:42 +0000 (0:00:00.288) 0:08:27.042 ************ 2025-05-19 19:49:30.594810 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594814 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594819 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594824 | orchestrator | 2025-05-19 19:49:30.594829 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.594833 | orchestrator | Monday 19 May 2025 19:43:42 +0000 (0:00:00.482) 0:08:27.524 ************ 2025-05-19 19:49:30.594838 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594843 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594848 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594852 | orchestrator | 2025-05-19 19:49:30.594857 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.594862 | orchestrator | Monday 19 May 2025 19:43:42 +0000 (0:00:00.299) 0:08:27.823 ************ 2025-05-19 19:49:30.594866 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594871 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594876 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594881 | orchestrator | 2025-05-19 19:49:30.594886 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.594890 | orchestrator | Monday 19 May 2025 19:43:43 +0000 (0:00:00.266) 0:08:28.089 ************ 2025-05-19 19:49:30.594895 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.594900 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.594905 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.594909 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.594914 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594919 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594923 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.594928 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.594933 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594938 | orchestrator | 2025-05-19 19:49:30.594942 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.594947 | orchestrator | Monday 19 May 2025 19:43:43 +0000 (0:00:00.313) 0:08:28.402 ************ 2025-05-19 19:49:30.594952 | 
orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-19 19:49:30.594957 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-19 19:49:30.594962 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.594966 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-19 19:49:30.594971 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-19 19:49:30.594976 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.594985 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-19 19:49:30.594990 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-19 19:49:30.594994 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.594999 | orchestrator | 2025-05-19 19:49:30.595004 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.595009 | orchestrator | Monday 19 May 2025 19:43:43 +0000 (0:00:00.523) 0:08:28.926 ************ 2025-05-19 19:49:30.595013 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595018 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595023 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595027 | orchestrator | 2025-05-19 19:49:30.595032 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.595037 | orchestrator | Monday 19 May 2025 19:43:44 +0000 (0:00:00.280) 0:08:29.207 ************ 2025-05-19 19:49:30.595042 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595046 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595051 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595056 | orchestrator | 2025-05-19 19:49:30.595064 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.595069 | orchestrator | Monday 19 May 2025 19:43:44 +0000 (0:00:00.284) 0:08:29.491 ************ 2025-05-19 19:49:30.595074 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595078 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595083 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595088 | orchestrator | 2025-05-19 19:49:30.595093 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.595097 | orchestrator | Monday 19 May 2025 19:43:44 +0000 (0:00:00.293) 0:08:29.785 ************ 2025-05-19 19:49:30.595102 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595107 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595112 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595116 | orchestrator | 2025-05-19 19:49:30.595121 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.595126 | orchestrator | Monday 19 May 2025 19:43:45 +0000 (0:00:00.484) 0:08:30.269 ************ 2025-05-19 19:49:30.595131 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595135 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595140 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595145 | orchestrator | 2025-05-19 19:49:30.595150 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.595170 | orchestrator | Monday 
19 May 2025 19:43:45 +0000 (0:00:00.327) 0:08:30.597 ************ 2025-05-19 19:49:30.595176 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595180 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595185 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595190 | orchestrator | 2025-05-19 19:49:30.595194 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.595199 | orchestrator | Monday 19 May 2025 19:43:45 +0000 (0:00:00.308) 0:08:30.905 ************ 2025-05-19 19:49:30.595204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.595209 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.595214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.595218 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595223 | orchestrator | 2025-05-19 19:49:30.595230 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.595238 | orchestrator | Monday 19 May 2025 19:43:46 +0000 (0:00:00.501) 0:08:31.406 ************ 2025-05-19 19:49:30.595246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.595253 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.595260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.595272 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595279 | orchestrator | 2025-05-19 19:49:30.595287 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.595296 | orchestrator | Monday 19 May 2025 19:43:46 +0000 (0:00:00.446) 0:08:31.853 ************ 2025-05-19 19:49:30.595304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.595312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.595336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.595344 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595352 | orchestrator | 2025-05-19 19:49:30.595359 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.595367 | orchestrator | Monday 19 May 2025 19:43:47 +0000 (0:00:00.798) 0:08:32.652 ************ 2025-05-19 19:49:30.595375 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595382 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595389 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595398 | orchestrator | 2025-05-19 19:49:30.595403 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.595408 | orchestrator | Monday 19 May 2025 19:43:48 +0000 (0:00:00.660) 0:08:33.313 ************ 2025-05-19 19:49:30.595412 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.595417 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595422 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.595426 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595431 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.595436 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595440 | orchestrator | 2025-05-19 19:49:30.595445 | 
orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.595450 | orchestrator | Monday 19 May 2025 19:43:48 +0000 (0:00:00.520) 0:08:33.834 ************ 2025-05-19 19:49:30.595454 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595459 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595464 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595469 | orchestrator | 2025-05-19 19:49:30.595473 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.595478 | orchestrator | Monday 19 May 2025 19:43:49 +0000 (0:00:00.346) 0:08:34.180 ************ 2025-05-19 19:49:30.595483 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595487 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595492 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595497 | orchestrator | 2025-05-19 19:49:30.595501 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.595506 | orchestrator | Monday 19 May 2025 19:43:49 +0000 (0:00:00.340) 0:08:34.520 ************ 2025-05-19 19:49:30.595511 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.595516 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595520 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.595525 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595530 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.595534 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595539 | orchestrator | 2025-05-19 19:49:30.595544 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.595552 | orchestrator | Monday 19 May 2025 19:43:50 +0000 (0:00:00.885) 0:08:35.406 ************ 2025-05-19 19:49:30.595557 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.595562 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595567 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.595584 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595591 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.595596 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595601 | orchestrator | 2025-05-19 19:49:30.595606 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.595610 | orchestrator | Monday 19 May 2025 19:43:50 +0000 (0:00:00.444) 0:08:35.850 ************ 2025-05-19 19:49:30.595615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.595620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.595625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.595629 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595657 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.595663 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 
19:49:30.595667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.595672 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:49:30.595681 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.595686 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.595691 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595696 | orchestrator | 2025-05-19 19:49:30.595700 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.595705 | orchestrator | Monday 19 May 2025 19:43:51 +0000 (0:00:00.636) 0:08:36.487 ************ 2025-05-19 19:49:30.595710 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595715 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595719 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595724 | orchestrator | 2025-05-19 19:49:30.595729 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.595734 | orchestrator | Monday 19 May 2025 19:43:52 +0000 (0:00:00.889) 0:08:37.376 ************ 2025-05-19 19:49:30.595738 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.595743 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595748 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.595753 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595757 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.595762 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595767 | orchestrator | 2025-05-19 19:49:30.595771 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.595776 | orchestrator | Monday 19 May 2025 19:43:53 +0000 (0:00:00.580) 0:08:37.957 ************ 2025-05-19 19:49:30.595781 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595786 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595790 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595795 | orchestrator | 2025-05-19 19:49:30.595800 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.595805 | orchestrator | Monday 19 May 2025 19:43:53 +0000 (0:00:00.842) 0:08:38.799 ************ 2025-05-19 19:49:30.595809 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595814 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595819 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595824 | orchestrator | 2025-05-19 19:49:30.595828 | orchestrator | TASK [ceph-osd : set_fact add_osd] ********************************************* 2025-05-19 19:49:30.595833 | orchestrator | Monday 19 May 2025 19:43:54 +0000 (0:00:00.587) 0:08:39.387 ************ 2025-05-19 19:49:30.595838 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.595843 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.595848 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.595857 | orchestrator | 2025-05-19 19:49:30.595862 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] ********************************** 2025-05-19 19:49:30.595867 | orchestrator | Monday 19 May 2025 19:43:55 +0000 (0:00:00.714) 0:08:40.101 ************ 2025-05-19 
19:49:30.595871 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 19:49:30.595876 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:49:30.595881 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:49:30.595886 | orchestrator | 2025-05-19 19:49:30.595890 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ****************************** 2025-05-19 19:49:30.595895 | orchestrator | Monday 19 May 2025 19:43:55 +0000 (0:00:00.777) 0:08:40.879 ************ 2025-05-19 19:49:30.595900 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.595905 | orchestrator | 2025-05-19 19:49:30.595909 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ******************** 2025-05-19 19:49:30.595914 | orchestrator | Monday 19 May 2025 19:43:56 +0000 (0:00:00.565) 0:08:41.445 ************ 2025-05-19 19:49:30.595922 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595932 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595943 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.595950 | orchestrator | 2025-05-19 19:49:30.595958 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ****************** 2025-05-19 19:49:30.595970 | orchestrator | Monday 19 May 2025 19:43:57 +0000 (0:00:00.590) 0:08:42.036 ************ 2025-05-19 19:49:30.595978 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.595985 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.595993 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596002 | orchestrator | 2025-05-19 19:49:30.596011 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] ********************************** 2025-05-19 19:49:30.596019 | orchestrator | Monday 19 May 2025 19:43:57 +0000 (0:00:00.328) 0:08:42.364 ************ 2025-05-19 19:49:30.596027 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596034 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596039 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596043 | orchestrator | 2025-05-19 19:49:30.596048 | orchestrator | TASK [ceph-osd : disable transparent hugepage] ********************************* 2025-05-19 19:49:30.596053 | orchestrator | Monday 19 May 2025 19:43:57 +0000 (0:00:00.318) 0:08:42.682 ************ 2025-05-19 19:49:30.596058 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596062 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596067 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596072 | orchestrator | 2025-05-19 19:49:30.596076 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] ******************************* 2025-05-19 19:49:30.596081 | orchestrator | Monday 19 May 2025 19:43:58 +0000 (0:00:00.320) 0:08:43.003 ************ 2025-05-19 19:49:30.596086 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.596091 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.596095 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.596100 | orchestrator | 2025-05-19 19:49:30.596125 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] ********************************** 2025-05-19 19:49:30.596131 | orchestrator | Monday 19 May 2025 19:43:58 +0000 (0:00:00.946) 0:08:43.950 
************ 2025-05-19 19:49:30.596136 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.596140 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.596145 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.596150 | orchestrator | 2025-05-19 19:49:30.596155 | orchestrator | TASK [ceph-osd : apply operating system tuning] ******************************** 2025-05-19 19:49:30.596159 | orchestrator | Monday 19 May 2025 19:43:59 +0000 (0:00:00.346) 0:08:44.296 ************ 2025-05-19 19:49:30.596164 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 19:49:30.596169 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 19:49:30.596182 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 19:49:30.596187 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 19:49:30.596191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 19:49:30.596196 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 19:49:30.596201 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 19:49:30.596206 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 19:49:30.596210 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 19:49:30.596215 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 19:49:30.596220 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 19:49:30.596225 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 19:49:30.596233 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 19:49:30.596244 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 19:49:30.596254 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 19:49:30.596261 | orchestrator | 2025-05-19 19:49:30.596269 | orchestrator | TASK [ceph-osd : install dependencies] ***************************************** 2025-05-19 19:49:30.596277 | orchestrator | Monday 19 May 2025 19:44:02 +0000 (0:00:03.138) 0:08:47.435 ************ 2025-05-19 19:49:30.596286 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596293 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596302 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596307 | orchestrator | 2025-05-19 19:49:30.596312 | orchestrator | TASK [ceph-osd : include_tasks common.yml] ************************************* 2025-05-19 19:49:30.596362 | orchestrator | Monday 19 May 2025 19:44:03 +0000 (0:00:00.630) 0:08:48.065 ************ 2025-05-19 19:49:30.596370 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.596378 | orchestrator | 2025-05-19 19:49:30.596386 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] ********************* 2025-05-19 19:49:30.596393 | orchestrator | 
Monday 19 May 2025 19:44:03 +0000 (0:00:00.579) 0:08:48.644 ************ 2025-05-19 19:49:30.596402 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 19:49:30.596408 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 19:49:30.596412 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 19:49:30.596417 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-19 19:49:30.596423 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-19 19:49:30.596427 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-19 19:49:30.596432 | orchestrator | 2025-05-19 19:49:30.596437 | orchestrator | TASK [ceph-osd : get keys from monitors] *************************************** 2025-05-19 19:49:30.596442 | orchestrator | Monday 19 May 2025 19:44:04 +0000 (0:00:01.028) 0:08:49.673 ************ 2025-05-19 19:49:30.596450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:49:30.596455 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.596460 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 19:49:30.596465 | orchestrator | 2025-05-19 19:49:30.596469 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] *********************************** 2025-05-19 19:49:30.596474 | orchestrator | Monday 19 May 2025 19:44:06 +0000 (0:00:01.826) 0:08:51.499 ************ 2025-05-19 19:49:30.596484 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 19:49:30.596489 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.596493 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.596498 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 19:49:30.596503 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.596507 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.596512 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 19:49:30.596517 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.596522 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.596526 | orchestrator | 2025-05-19 19:49:30.596531 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************ 2025-05-19 19:49:30.596536 | orchestrator | Monday 19 May 2025 19:44:07 +0000 (0:00:01.027) 0:08:52.527 ************ 2025-05-19 19:49:30.596564 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.596570 | orchestrator | 2025-05-19 19:49:30.596574 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] ************************** 2025-05-19 19:49:30.596579 | orchestrator | Monday 19 May 2025 19:44:09 +0000 (0:00:02.329) 0:08:54.856 ************ 2025-05-19 19:49:30.596584 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.596589 | orchestrator | 2025-05-19 19:49:30.596594 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] *** 2025-05-19 19:49:30.596599 | orchestrator | Monday 19 May 2025 19:44:10 +0000 (0:00:00.869) 0:08:55.725 ************ 2025-05-19 19:49:30.596603 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596608 | orchestrator | skipping: 
[testbed-node-4] 2025-05-19 19:49:30.596613 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596618 | orchestrator | 2025-05-19 19:49:30.596622 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] *** 2025-05-19 19:49:30.596627 | orchestrator | Monday 19 May 2025 19:44:11 +0000 (0:00:00.349) 0:08:56.075 ************ 2025-05-19 19:49:30.596632 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596637 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596642 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596646 | orchestrator | 2025-05-19 19:49:30.596651 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] *** 2025-05-19 19:49:30.596656 | orchestrator | Monday 19 May 2025 19:44:11 +0000 (0:00:00.319) 0:08:56.394 ************ 2025-05-19 19:49:30.596661 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596665 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596670 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596674 | orchestrator | 2025-05-19 19:49:30.596679 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] *** 2025-05-19 19:49:30.596683 | orchestrator | Monday 19 May 2025 19:44:11 +0000 (0:00:00.299) 0:08:56.693 ************ 2025-05-19 19:49:30.596688 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.596692 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.596697 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.596701 | orchestrator | 2025-05-19 19:49:30.596706 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ****************************** 2025-05-19 19:49:30.596711 | orchestrator | Monday 19 May 2025 19:44:12 +0000 (0:00:00.646) 0:08:57.340 ************ 2025-05-19 19:49:30.596715 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.596720 | orchestrator | 2025-05-19 19:49:30.596724 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] ********************* 2025-05-19 19:49:30.596729 | orchestrator | Monday 19 May 2025 19:44:12 +0000 (0:00:00.595) 0:08:57.936 ************ 2025-05-19 19:49:30.596733 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-6eb1ee5c-85e6-559d-849b-4772bddae6d6', 'data_vg': 'ceph-6eb1ee5c-85e6-559d-849b-4772bddae6d6'}) 2025-05-19 19:49:30.596744 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-f4656c6e-aa1c-5ab7-9900-7160e6354d4d', 'data_vg': 'ceph-f4656c6e-aa1c-5ab7-9900-7160e6354d4d'}) 2025-05-19 19:49:30.596748 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e', 'data_vg': 'ceph-54ed6fee-c89e-5ff4-bbfb-dc8e4c8c481e'}) 2025-05-19 19:49:30.596753 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-702b6aa6-b3de-5669-bdb1-4e94528c6268', 'data_vg': 'ceph-702b6aa6-b3de-5669-bdb1-4e94528c6268'}) 2025-05-19 19:49:30.596758 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5fdf60fa-c839-55c0-9693-b393079e2a5b', 'data_vg': 'ceph-5fdf60fa-c839-55c0-9693-b393079e2a5b'}) 2025-05-19 19:49:30.596762 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5646b4ad-081a-5fe7-ab17-c0ecc5756623', 'data_vg': 'ceph-5646b4ad-081a-5fe7-ab17-c0ecc5756623'}) 2025-05-19 
19:49:30.596767 | orchestrator | 2025-05-19 19:49:30.596771 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-19 19:49:30.596776 | orchestrator | Monday 19 May 2025 19:44:53 +0000 (0:00:40.506) 0:09:38.442 ************ 2025-05-19 19:49:30.596783 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.596788 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.596792 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.596797 | orchestrator | 2025-05-19 19:49:30.596801 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] ********************************* 2025-05-19 19:49:30.596806 | orchestrator | Monday 19 May 2025 19:44:54 +0000 (0:00:00.517) 0:09:38.960 ************ 2025-05-19 19:49:30.596810 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.596815 | orchestrator | 2025-05-19 19:49:30.596819 | orchestrator | TASK [ceph-osd : get osd ids] ************************************************** 2025-05-19 19:49:30.596824 | orchestrator | Monday 19 May 2025 19:44:54 +0000 (0:00:00.647) 0:09:39.607 ************ 2025-05-19 19:49:30.596829 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.596833 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.596838 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.596842 | orchestrator | 2025-05-19 19:49:30.596847 | orchestrator | TASK [ceph-osd : collect osd ids] ********************************************** 2025-05-19 19:49:30.596851 | orchestrator | Monday 19 May 2025 19:44:55 +0000 (0:00:00.733) 0:09:40.341 ************ 2025-05-19 19:49:30.596855 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.596860 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.596865 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.596869 | orchestrator | 2025-05-19 19:49:30.596889 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************ 2025-05-19 19:49:30.596895 | orchestrator | Monday 19 May 2025 19:44:57 +0000 (0:00:02.109) 0:09:42.450 ************ 2025-05-19 19:49:30.596899 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.596904 | orchestrator | 2025-05-19 19:49:30.596908 | orchestrator | TASK [ceph-osd : generate systemd unit file] *********************************** 2025-05-19 19:49:30.596913 | orchestrator | Monday 19 May 2025 19:44:58 +0000 (0:00:00.570) 0:09:43.021 ************ 2025-05-19 19:49:30.596917 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.596922 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.596926 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.596931 | orchestrator | 2025-05-19 19:49:30.596935 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************ 2025-05-19 19:49:30.596940 | orchestrator | Monday 19 May 2025 19:44:59 +0000 (0:00:01.464) 0:09:44.486 ************ 2025-05-19 19:49:30.596944 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.596949 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.596953 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.596961 | orchestrator | 2025-05-19 19:49:30.596966 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] *************************************** 2025-05-19 19:49:30.596971 | orchestrator | Monday 19 May 
2025 19:45:00 +0000 (0:00:01.274) 0:09:45.760 ************ 2025-05-19 19:49:30.596975 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.596979 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.596984 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.596988 | orchestrator | 2025-05-19 19:49:30.596993 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] ************* 2025-05-19 19:49:30.596997 | orchestrator | Monday 19 May 2025 19:45:02 +0000 (0:00:01.809) 0:09:47.570 ************ 2025-05-19 19:49:30.597002 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597007 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597011 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597016 | orchestrator | 2025-05-19 19:49:30.597020 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] *********************** 2025-05-19 19:49:30.597025 | orchestrator | Monday 19 May 2025 19:45:02 +0000 (0:00:00.351) 0:09:47.922 ************ 2025-05-19 19:49:30.597029 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597034 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597038 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597042 | orchestrator | 2025-05-19 19:49:30.597047 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] *** 2025-05-19 19:49:30.597052 | orchestrator | Monday 19 May 2025 19:45:03 +0000 (0:00:00.659) 0:09:48.581 ************ 2025-05-19 19:49:30.597056 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 19:49:30.597061 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-05-19 19:49:30.597065 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-05-19 19:49:30.597070 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-05-19 19:49:30.597074 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-05-19 19:49:30.597079 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-05-19 19:49:30.597083 | orchestrator | 2025-05-19 19:49:30.597088 | orchestrator | TASK [ceph-osd : systemd start osd] ******************************************** 2025-05-19 19:49:30.597092 | orchestrator | Monday 19 May 2025 19:45:04 +0000 (0:00:01.098) 0:09:49.679 ************ 2025-05-19 19:49:30.597097 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-05-19 19:49:30.597101 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-05-19 19:49:30.597106 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-05-19 19:49:30.597110 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-05-19 19:49:30.597115 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-05-19 19:49:30.597119 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-05-19 19:49:30.597124 | orchestrator | 2025-05-19 19:49:30.597128 | orchestrator | TASK [ceph-osd : unset noup flag] ********************************************** 2025-05-19 19:49:30.597133 | orchestrator | Monday 19 May 2025 19:45:08 +0000 (0:00:04.014) 0:09:53.694 ************ 2025-05-19 19:49:30.597138 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597145 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597152 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.597159 | orchestrator | 2025-05-19 19:49:30.597168 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************ 2025-05-19 19:49:30.597176 | orchestrator | Monday 19 May 
2025 19:45:11 +0000 (0:00:02.324) 0:09:56.019 ************ 2025-05-19 19:49:30.597183 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597191 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597197 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left). 2025-05-19 19:49:30.597204 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.597209 | orchestrator | 2025-05-19 19:49:30.597214 | orchestrator | TASK [ceph-osd : include crush_rules.yml] ************************************** 2025-05-19 19:49:30.597218 | orchestrator | Monday 19 May 2025 19:45:23 +0000 (0:00:12.671) 0:10:08.690 ************ 2025-05-19 19:49:30.597229 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597237 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597244 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597252 | orchestrator | 2025-05-19 19:49:30.597258 | orchestrator | TASK [ceph-osd : include openstack_config.yml] ********************************* 2025-05-19 19:49:30.597265 | orchestrator | Monday 19 May 2025 19:45:24 +0000 (0:00:00.477) 0:10:09.168 ************ 2025-05-19 19:49:30.597273 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597281 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597288 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597296 | orchestrator | 2025-05-19 19:49:30.597302 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-19 19:49:30.597307 | orchestrator | Monday 19 May 2025 19:45:25 +0000 (0:00:01.180) 0:10:10.349 ************ 2025-05-19 19:49:30.597311 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.597330 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.597336 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.597341 | orchestrator | 2025-05-19 19:49:30.597345 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-19 19:49:30.597370 | orchestrator | Monday 19 May 2025 19:45:26 +0000 (0:00:00.970) 0:10:11.319 ************ 2025-05-19 19:49:30.597375 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.597380 | orchestrator | 2025-05-19 19:49:30.597384 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] ********************** 2025-05-19 19:49:30.597389 | orchestrator | Monday 19 May 2025 19:45:26 +0000 (0:00:00.563) 0:10:11.883 ************ 2025-05-19 19:49:30.597393 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.597398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.597402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.597407 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597411 | orchestrator | 2025-05-19 19:49:30.597415 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ******** 2025-05-19 19:49:30.597420 | orchestrator | Monday 19 May 2025 19:45:27 +0000 (0:00:00.430) 0:10:12.313 ************ 2025-05-19 19:49:30.597424 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597429 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597433 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597437 | 
orchestrator | 2025-05-19 19:49:30.597442 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] ******************************* 2025-05-19 19:49:30.597446 | orchestrator | Monday 19 May 2025 19:45:27 +0000 (0:00:00.322) 0:10:12.636 ************ 2025-05-19 19:49:30.597451 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597455 | orchestrator | 2025-05-19 19:49:30.597460 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] *********************** 2025-05-19 19:49:30.597464 | orchestrator | Monday 19 May 2025 19:45:28 +0000 (0:00:00.575) 0:10:13.211 ************ 2025-05-19 19:49:30.597469 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597473 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597478 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597482 | orchestrator | 2025-05-19 19:49:30.597487 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] ********************************* 2025-05-19 19:49:30.597491 | orchestrator | Monday 19 May 2025 19:45:28 +0000 (0:00:00.342) 0:10:13.554 ************ 2025-05-19 19:49:30.597496 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597500 | orchestrator | 2025-05-19 19:49:30.597505 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ******************** 2025-05-19 19:49:30.597509 | orchestrator | Monday 19 May 2025 19:45:28 +0000 (0:00:00.252) 0:10:13.807 ************ 2025-05-19 19:49:30.597514 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597518 | orchestrator | 2025-05-19 19:49:30.597523 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-19 19:49:30.597533 | orchestrator | Monday 19 May 2025 19:45:29 +0000 (0:00:00.255) 0:10:14.063 ************ 2025-05-19 19:49:30.597538 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597542 | orchestrator | 2025-05-19 19:49:30.597547 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ****************************** 2025-05-19 19:49:30.597551 | orchestrator | Monday 19 May 2025 19:45:29 +0000 (0:00:00.149) 0:10:14.212 ************ 2025-05-19 19:49:30.597556 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597560 | orchestrator | 2025-05-19 19:49:30.597565 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] ***************** 2025-05-19 19:49:30.597569 | orchestrator | Monday 19 May 2025 19:45:29 +0000 (0:00:00.272) 0:10:14.484 ************ 2025-05-19 19:49:30.597574 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597578 | orchestrator | 2025-05-19 19:49:30.597583 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] ******************* 2025-05-19 19:49:30.597587 | orchestrator | Monday 19 May 2025 19:45:29 +0000 (0:00:00.247) 0:10:14.732 ************ 2025-05-19 19:49:30.597592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.597596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.597601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.597605 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597609 | orchestrator | 2025-05-19 19:49:30.597614 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] ********* 2025-05-19 19:49:30.597618 | orchestrator | Monday 19 May 2025 19:45:30 +0000 (0:00:00.459) 0:10:15.192 ************ 
2025-05-19 19:49:30.597623 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597627 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597632 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597636 | orchestrator | 2025-05-19 19:49:30.597641 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] *************** 2025-05-19 19:49:30.597648 | orchestrator | Monday 19 May 2025 19:45:30 +0000 (0:00:00.731) 0:10:15.923 ************ 2025-05-19 19:49:30.597653 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597658 | orchestrator | 2025-05-19 19:49:30.597662 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] **************************** 2025-05-19 19:49:30.597667 | orchestrator | Monday 19 May 2025 19:45:31 +0000 (0:00:00.256) 0:10:16.180 ************ 2025-05-19 19:49:30.597671 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597675 | orchestrator | 2025-05-19 19:49:30.597680 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.597684 | orchestrator | Monday 19 May 2025 19:45:31 +0000 (0:00:00.250) 0:10:16.430 ************ 2025-05-19 19:49:30.597689 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.597693 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.597698 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.597702 | orchestrator | 2025-05-19 19:49:30.597707 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-05-19 19:49:30.597711 | orchestrator | 2025-05-19 19:49:30.597716 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.597720 | orchestrator | Monday 19 May 2025 19:45:34 +0000 (0:00:03.067) 0:10:19.498 ************ 2025-05-19 19:49:30.597740 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.597748 | orchestrator | 2025-05-19 19:49:30.597752 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.597757 | orchestrator | Monday 19 May 2025 19:45:35 +0000 (0:00:01.423) 0:10:20.921 ************ 2025-05-19 19:49:30.597761 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597766 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.597770 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597775 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.597779 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597790 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.597795 | orchestrator | 2025-05-19 19:49:30.597799 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.597804 | orchestrator | Monday 19 May 2025 19:45:36 +0000 (0:00:00.778) 0:10:21.700 ************ 2025-05-19 19:49:30.597808 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.597812 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.597817 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.597821 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.597826 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.597830 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.597835 | orchestrator | 2025-05-19 19:49:30.597840 | 
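The ceph-handler sequence that closed the OSD play above implements a guarded rolling restart: fetch the pool list, switch the balancer and PG autoscaling off, restart the OSD services, then switch both back on; in this run no OSD config changed, so every step except the tempdir housekeeping was skipped. A condensed sketch of that sequence, assuming a `ceph` CLI on the first monitor, a registered `pool_list`, and an `osd_ids` list on each OSD host (all illustrative names, not the role's own variables):

    # Sketch of the disable -> restart -> re-enable flow the skipped handlers cover.
    - name: Get the pool list
      ansible.builtin.command: ceph osd pool ls
      register: pool_list
      delegate_to: "{{ groups['mons'][0] }}"   # assumed monitor group name
      run_once: true

    - name: Pause data balancing before restarting OSDs
      ansible.builtin.command: ceph balancer off
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true

    - name: Disable PG autoscaling on all pools
      ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode off"
      loop: "{{ pool_list.stdout_lines | default([]) }}"
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true

    - name: Restart the OSD services on this host
      ansible.builtin.service:
        name: "ceph-osd@{{ item }}"            # assumed systemd template unit
        state: restarted
      loop: "{{ osd_ids | default([]) }}"

    - name: Re-enable PG autoscaling on all pools
      ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode on"
      loop: "{{ pool_list.stdout_lines | default([]) }}"
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true

    - name: Re-enable the balancer
      ansible.builtin.command: ceph balancer on
      delegate_to: "{{ groups['mons'][0] }}"
      run_once: true

Disabling the balancer and autoscaler first keeps placement groups from being rebalanced while individual OSDs bounce, which is why the real handlers only fire when a restart is actually triggered.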
orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.597844 | orchestrator | Monday 19 May 2025 19:45:38 +0000 (0:00:01.553) 0:10:23.254 ************ 2025-05-19 19:49:30.597849 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.597853 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.597857 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.597862 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.597866 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.597871 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.597875 | orchestrator | 2025-05-19 19:49:30.597880 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.597884 | orchestrator | Monday 19 May 2025 19:45:39 +0000 (0:00:01.492) 0:10:24.746 ************ 2025-05-19 19:49:30.597889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.597893 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.597898 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.597902 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.597907 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.597911 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.597916 | orchestrator | 2025-05-19 19:49:30.597920 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.597925 | orchestrator | Monday 19 May 2025 19:45:41 +0000 (0:00:01.331) 0:10:26.078 ************ 2025-05-19 19:49:30.597929 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597934 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.597938 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597943 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597947 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.597952 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.597956 | orchestrator | 2025-05-19 19:49:30.597961 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.597965 | orchestrator | Monday 19 May 2025 19:45:42 +0000 (0:00:01.105) 0:10:27.184 ************ 2025-05-19 19:49:30.597970 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.597974 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.597979 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.597983 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.597988 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.597992 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.597996 | orchestrator | 2025-05-19 19:49:30.598001 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.598006 | orchestrator | Monday 19 May 2025 19:45:42 +0000 (0:00:00.691) 0:10:27.875 ************ 2025-05-19 19:49:30.598010 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598035 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598041 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598045 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598050 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598054 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598059 | orchestrator | 2025-05-19 19:49:30.598063 | orchestrator | TASK [ceph-handler : check for a 
tcmu-runner container] ************************ 2025-05-19 19:49:30.598071 | orchestrator | Monday 19 May 2025 19:45:43 +0000 (0:00:00.970) 0:10:28.846 ************ 2025-05-19 19:49:30.598076 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598080 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598084 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598089 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598093 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598098 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598102 | orchestrator | 2025-05-19 19:49:30.598110 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.598114 | orchestrator | Monday 19 May 2025 19:45:44 +0000 (0:00:00.679) 0:10:29.526 ************ 2025-05-19 19:49:30.598119 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598123 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598128 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598132 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598136 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598141 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598145 | orchestrator | 2025-05-19 19:49:30.598150 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.598154 | orchestrator | Monday 19 May 2025 19:45:45 +0000 (0:00:01.128) 0:10:30.655 ************ 2025-05-19 19:49:30.598159 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598163 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598168 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598172 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598177 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598181 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598185 | orchestrator | 2025-05-19 19:49:30.598190 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.598195 | orchestrator | Monday 19 May 2025 19:45:46 +0000 (0:00:00.742) 0:10:31.397 ************ 2025-05-19 19:49:30.598199 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.598204 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.598226 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.598235 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.598243 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.598250 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.598258 | orchestrator | 2025-05-19 19:49:30.598265 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.598272 | orchestrator | Monday 19 May 2025 19:45:47 +0000 (0:00:01.438) 0:10:32.835 ************ 2025-05-19 19:49:30.598281 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598289 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598297 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598304 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598312 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598333 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598338 | orchestrator | 2025-05-19 19:49:30.598343 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] 
****************************** 2025-05-19 19:49:30.598347 | orchestrator | Monday 19 May 2025 19:45:48 +0000 (0:00:00.745) 0:10:33.581 ************ 2025-05-19 19:49:30.598352 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.598356 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.598361 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.598365 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598369 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598374 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598378 | orchestrator | 2025-05-19 19:49:30.598383 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.598387 | orchestrator | Monday 19 May 2025 19:45:49 +0000 (0:00:01.083) 0:10:34.664 ************ 2025-05-19 19:49:30.598394 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598401 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598413 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598474 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.598483 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.598490 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.598496 | orchestrator | 2025-05-19 19:49:30.598502 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.598509 | orchestrator | Monday 19 May 2025 19:45:50 +0000 (0:00:00.594) 0:10:35.259 ************ 2025-05-19 19:49:30.598515 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598521 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598528 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598534 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.598540 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.598547 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.598554 | orchestrator | 2025-05-19 19:49:30.598561 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.598568 | orchestrator | Monday 19 May 2025 19:45:51 +0000 (0:00:00.783) 0:10:36.043 ************ 2025-05-19 19:49:30.598576 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598583 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598590 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598598 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.598605 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.598613 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.598622 | orchestrator | 2025-05-19 19:49:30.598629 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.598650 | orchestrator | Monday 19 May 2025 19:45:51 +0000 (0:00:00.606) 0:10:36.650 ************ 2025-05-19 19:49:30.598659 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598666 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598674 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598680 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598686 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598693 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598700 | orchestrator | 2025-05-19 19:49:30.598708 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.598715 | 
orchestrator | Monday 19 May 2025 19:45:52 +0000 (0:00:00.710) 0:10:37.360 ************ 2025-05-19 19:49:30.598723 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598730 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598738 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598742 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598747 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598752 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598756 | orchestrator | 2025-05-19 19:49:30.598760 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.598765 | orchestrator | Monday 19 May 2025 19:45:52 +0000 (0:00:00.585) 0:10:37.946 ************ 2025-05-19 19:49:30.598770 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.598774 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.598779 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.598783 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598788 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598799 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598803 | orchestrator | 2025-05-19 19:49:30.598808 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.598812 | orchestrator | Monday 19 May 2025 19:45:53 +0000 (0:00:00.750) 0:10:38.697 ************ 2025-05-19 19:49:30.598819 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.598826 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.598833 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.598840 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.598846 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.598852 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.598859 | orchestrator | 2025-05-19 19:49:30.598878 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.598885 | orchestrator | Monday 19 May 2025 19:45:54 +0000 (0:00:00.818) 0:10:39.516 ************ 2025-05-19 19:49:30.598893 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.598900 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.598907 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.598914 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.598923 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.598930 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.598937 | orchestrator | 2025-05-19 19:49:30.598944 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.598951 | orchestrator | Monday 19 May 2025 19:45:55 +0000 (0:00:01.201) 0:10:40.717 ************ 2025-05-19 19:49:30.599016 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599034 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599039 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599043 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599048 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599052 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599057 | orchestrator | 2025-05-19 19:49:30.599062 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.599066 | orchestrator | Monday 19 May 2025 19:45:56 +0000 
(0:00:00.690) 0:10:41.407 ************ 2025-05-19 19:49:30.599071 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599075 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599080 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599084 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599089 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599093 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599097 | orchestrator | 2025-05-19 19:49:30.599102 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.599106 | orchestrator | Monday 19 May 2025 19:45:57 +0000 (0:00:01.029) 0:10:42.437 ************ 2025-05-19 19:49:30.599111 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599115 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599120 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599124 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599129 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599133 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599138 | orchestrator | 2025-05-19 19:49:30.599142 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.599147 | orchestrator | Monday 19 May 2025 19:45:58 +0000 (0:00:00.776) 0:10:43.213 ************ 2025-05-19 19:49:30.599151 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599155 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599160 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599164 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599169 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599173 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599178 | orchestrator | 2025-05-19 19:49:30.599182 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.599187 | orchestrator | Monday 19 May 2025 19:45:59 +0000 (0:00:00.817) 0:10:44.031 ************ 2025-05-19 19:49:30.599191 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599196 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599200 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599204 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599209 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599213 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599218 | orchestrator | 2025-05-19 19:49:30.599222 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.599228 | orchestrator | Monday 19 May 2025 19:45:59 +0000 (0:00:00.528) 0:10:44.559 ************ 2025-05-19 19:49:30.599242 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599249 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599256 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599264 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599270 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599278 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599285 | orchestrator | 2025-05-19 19:49:30.599291 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.599299 | orchestrator | Monday 
19 May 2025 19:46:00 +0000 (0:00:00.759) 0:10:45.319 ************ 2025-05-19 19:49:30.599307 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599313 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599364 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599371 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599378 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599385 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599390 | orchestrator | 2025-05-19 19:49:30.599394 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.599399 | orchestrator | Monday 19 May 2025 19:46:00 +0000 (0:00:00.598) 0:10:45.917 ************ 2025-05-19 19:49:30.599403 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599408 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599412 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599417 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599421 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599425 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599430 | orchestrator | 2025-05-19 19:49:30.599434 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.599445 | orchestrator | Monday 19 May 2025 19:46:01 +0000 (0:00:00.681) 0:10:46.599 ************ 2025-05-19 19:49:30.599452 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599459 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599473 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599481 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599488 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599495 | orchestrator | 2025-05-19 19:49:30.599502 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.599510 | orchestrator | Monday 19 May 2025 19:46:02 +0000 (0:00:00.493) 0:10:47.092 ************ 2025-05-19 19:49:30.599517 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599525 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599533 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599541 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599548 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599556 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599563 | orchestrator | 2025-05-19 19:49:30.599570 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.599577 | orchestrator | Monday 19 May 2025 19:46:02 +0000 (0:00:00.669) 0:10:47.762 ************ 2025-05-19 19:49:30.599585 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599592 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599599 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599645 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599651 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599656 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599660 | orchestrator | 2025-05-19 19:49:30.599665 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, 
override from ceph_conf_overrides] *** 2025-05-19 19:49:30.599670 | orchestrator | Monday 19 May 2025 19:46:03 +0000 (0:00:00.590) 0:10:48.353 ************ 2025-05-19 19:49:30.599675 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.599686 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-19 19:49:30.599691 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.599695 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-19 19:49:30.599700 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599704 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.599709 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-19 19:49:30.599713 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599718 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.599722 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.599727 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599731 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.599736 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599740 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.599745 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599749 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.599753 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.599758 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599762 | orchestrator | 2025-05-19 19:49:30.599767 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.599771 | orchestrator | Monday 19 May 2025 19:46:04 +0000 (0:00:00.749) 0:10:49.102 ************ 2025-05-19 19:49:30.599775 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-19 19:49:30.599780 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-19 19:49:30.599784 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599788 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-19 19:49:30.599792 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-19 19:49:30.599796 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-19 19:49:30.599800 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-19 19:49:30.599804 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-19 19:49:30.599808 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-19 19:49:30.599812 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599817 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-19 19:49:30.599821 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-19 19:49:30.599825 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599829 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599833 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599837 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-19 19:49:30.599841 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-19 19:49:30.599845 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599849 | orchestrator | 2025-05-19 
19:49:30.599853 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.599857 | orchestrator | Monday 19 May 2025 19:46:04 +0000 (0:00:00.605) 0:10:49.708 ************ 2025-05-19 19:49:30.599861 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599866 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599870 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599874 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599878 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599882 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599886 | orchestrator | 2025-05-19 19:49:30.599890 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.599894 | orchestrator | Monday 19 May 2025 19:46:05 +0000 (0:00:00.732) 0:10:50.440 ************ 2025-05-19 19:49:30.599898 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599902 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599909 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599913 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599917 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599921 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599926 | orchestrator | 2025-05-19 19:49:30.599933 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.599939 | orchestrator | Monday 19 May 2025 19:46:06 +0000 (0:00:00.684) 0:10:51.124 ************ 2025-05-19 19:49:30.599943 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599947 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.599951 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.599955 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.599959 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.599963 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.599969 | orchestrator | 2025-05-19 19:49:30.599976 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.599982 | orchestrator | Monday 19 May 2025 19:46:06 +0000 (0:00:00.827) 0:10:51.952 ************ 2025-05-19 19:49:30.599988 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.599994 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600000 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600006 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600012 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600018 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600024 | orchestrator | 2025-05-19 19:49:30.600030 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.600037 | orchestrator | Monday 19 May 2025 19:46:07 +0000 (0:00:00.609) 0:10:52.561 ************ 2025-05-19 19:49:30.600068 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600076 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600082 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600088 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600094 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600100 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:49:30.600106 | orchestrator | 2025-05-19 19:49:30.600113 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.600119 | orchestrator | Monday 19 May 2025 19:46:08 +0000 (0:00:01.005) 0:10:53.567 ************ 2025-05-19 19:49:30.600126 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600133 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600139 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600146 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600153 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600159 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600166 | orchestrator | 2025-05-19 19:49:30.600172 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.600179 | orchestrator | Monday 19 May 2025 19:46:09 +0000 (0:00:00.811) 0:10:54.379 ************ 2025-05-19 19:49:30.600187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.600193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.600200 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.600206 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600212 | orchestrator | 2025-05-19 19:49:30.600218 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.600225 | orchestrator | Monday 19 May 2025 19:46:09 +0000 (0:00:00.427) 0:10:54.806 ************ 2025-05-19 19:49:30.600232 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.600239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.600245 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.600260 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600267 | orchestrator | 2025-05-19 19:49:30.600274 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.600281 | orchestrator | Monday 19 May 2025 19:46:10 +0000 (0:00:00.460) 0:10:55.267 ************ 2025-05-19 19:49:30.600287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.600294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.600301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.600308 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600335 | orchestrator | 2025-05-19 19:49:30.600343 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.600350 | orchestrator | Monday 19 May 2025 19:46:11 +0000 (0:00:00.778) 0:10:56.046 ************ 2025-05-19 19:49:30.600357 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600363 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600369 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600377 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600382 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600386 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600390 | orchestrator | 2025-05-19 19:49:30.600397 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 
2025-05-19 19:49:30.600403 | orchestrator | Monday 19 May 2025 19:46:11 +0000 (0:00:00.906) 0:10:56.953 ************ 2025-05-19 19:49:30.600409 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.600414 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.600421 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600428 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600435 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.600442 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.600448 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600455 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600462 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.600469 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600475 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.600483 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600487 | orchestrator | 2025-05-19 19:49:30.600492 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.600496 | orchestrator | Monday 19 May 2025 19:46:12 +0000 (0:00:00.866) 0:10:57.819 ************ 2025-05-19 19:49:30.600500 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600505 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600518 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600525 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600532 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600539 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600545 | orchestrator | 2025-05-19 19:49:30.600552 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.600559 | orchestrator | Monday 19 May 2025 19:46:13 +0000 (0:00:01.068) 0:10:58.888 ************ 2025-05-19 19:49:30.600565 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600569 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600573 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600577 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600582 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600586 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600590 | orchestrator | 2025-05-19 19:49:30.600594 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.600598 | orchestrator | Monday 19 May 2025 19:46:14 +0000 (0:00:00.726) 0:10:59.614 ************ 2025-05-19 19:49:30.600602 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 19:49:30.600611 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600615 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 19:49:30.600619 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 19:49:30.600624 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600627 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.600665 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600672 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600678 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.600685 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
19:49:30.600692 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.600698 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600705 | orchestrator | 2025-05-19 19:49:30.600711 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.600718 | orchestrator | Monday 19 May 2025 19:46:16 +0000 (0:00:01.381) 0:11:00.995 ************ 2025-05-19 19:49:30.600724 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600732 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600738 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600742 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.600746 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600750 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.600755 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600759 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.600763 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600767 | orchestrator | 2025-05-19 19:49:30.600771 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.600775 | orchestrator | Monday 19 May 2025 19:46:16 +0000 (0:00:00.764) 0:11:01.760 ************ 2025-05-19 19:49:30.600779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 19:49:30.600783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 19:49:30.600787 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 19:49:30.600791 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 19:49:30.600799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 19:49:30.600803 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-19 19:49:30.600807 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600811 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 19:49:30.600815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 19:49:30.600819 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 19:49:30.600823 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.600831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.600835 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.600839 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.600847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.600851 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.600855 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600859 | orchestrator | skipping: [testbed-node-5] => 
(item=testbed-node-3)  2025-05-19 19:49:30.600863 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.600872 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.600876 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600880 | orchestrator | 2025-05-19 19:49:30.600884 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.600888 | orchestrator | Monday 19 May 2025 19:46:18 +0000 (0:00:01.660) 0:11:03.420 ************ 2025-05-19 19:49:30.600892 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600896 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600900 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600904 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600908 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600912 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600916 | orchestrator | 2025-05-19 19:49:30.600920 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.600928 | orchestrator | Monday 19 May 2025 19:46:19 +0000 (0:00:01.486) 0:11:04.907 ************ 2025-05-19 19:49:30.600932 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600936 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600940 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600945 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.600949 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.600953 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.600957 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.600961 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.600965 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.600969 | orchestrator | 2025-05-19 19:49:30.600973 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.600977 | orchestrator | Monday 19 May 2025 19:46:21 +0000 (0:00:01.505) 0:11:06.412 ************ 2025-05-19 19:49:30.600981 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.600985 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.600989 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.600996 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601002 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601009 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601015 | orchestrator | 2025-05-19 19:49:30.601022 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.601034 | orchestrator | Monday 19 May 2025 19:46:22 +0000 (0:00:01.455) 0:11:07.868 ************ 2025-05-19 19:49:30.601042 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:49:30.601049 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:49:30.601056 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:49:30.601062 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601069 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601075 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601082 | orchestrator | 2025-05-19 19:49:30.601088 | orchestrator | TASK [ceph-crash : create client.crash keyring] 
******************************** 2025-05-19 19:49:30.601094 | orchestrator | Monday 19 May 2025 19:46:24 +0000 (0:00:01.480) 0:11:09.349 ************ 2025-05-19 19:49:30.601101 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.601107 | orchestrator | 2025-05-19 19:49:30.601114 | orchestrator | TASK [ceph-crash : get keys from monitors] ************************************* 2025-05-19 19:49:30.601120 | orchestrator | Monday 19 May 2025 19:46:28 +0000 (0:00:04.457) 0:11:13.806 ************ 2025-05-19 19:49:30.601127 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.601134 | orchestrator | 2025-05-19 19:49:30.601139 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] ********************************* 2025-05-19 19:49:30.601143 | orchestrator | Monday 19 May 2025 19:46:30 +0000 (0:00:01.795) 0:11:15.602 ************ 2025-05-19 19:49:30.601147 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.601151 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.601155 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.601167 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.601174 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.601181 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.601187 | orchestrator | 2025-05-19 19:49:30.601194 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] ************************** 2025-05-19 19:49:30.601200 | orchestrator | Monday 19 May 2025 19:46:32 +0000 (0:00:01.743) 0:11:17.345 ************ 2025-05-19 19:49:30.601208 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.601214 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.601224 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.601232 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.601240 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.601246 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.601252 | orchestrator | 2025-05-19 19:49:30.601258 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] ********************************** 2025-05-19 19:49:30.601265 | orchestrator | Monday 19 May 2025 19:46:33 +0000 (0:00:01.067) 0:11:18.413 ************ 2025-05-19 19:49:30.601271 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.601280 | orchestrator | 2025-05-19 19:49:30.601287 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ******** 2025-05-19 19:49:30.601293 | orchestrator | Monday 19 May 2025 19:46:34 +0000 (0:00:01.383) 0:11:19.796 ************ 2025-05-19 19:49:30.601300 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.601307 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.601313 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.601371 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.601378 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.601387 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.601392 | orchestrator | 2025-05-19 19:49:30.601396 | orchestrator | TASK [ceph-crash : start the ceph-crash service] ******************************* 2025-05-19 19:49:30.601400 | orchestrator | Monday 19 May 2025 19:46:36 +0000 (0:00:01.921) 0:11:21.717 ************ 2025-05-19 19:49:30.601404 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.601408 | 
orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.601412 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.601416 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.601423 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.601429 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.601436 | orchestrator | 2025-05-19 19:49:30.601442 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] **************************** 2025-05-19 19:49:30.601449 | orchestrator | Monday 19 May 2025 19:46:41 +0000 (0:00:04.374) 0:11:26.092 ************ 2025-05-19 19:49:30.601456 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.601463 | orchestrator | 2025-05-19 19:49:30.601469 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ****** 2025-05-19 19:49:30.601475 | orchestrator | Monday 19 May 2025 19:46:43 +0000 (0:00:02.802) 0:11:28.895 ************ 2025-05-19 19:49:30.601481 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.601487 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.601493 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.601507 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601515 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601521 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.601528 | orchestrator | 2025-05-19 19:49:30.601535 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] **************** 2025-05-19 19:49:30.601542 | orchestrator | Monday 19 May 2025 19:46:44 +0000 (0:00:01.013) 0:11:29.909 ************ 2025-05-19 19:49:30.601546 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:49:30.601550 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:49:30.601557 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.601572 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.601579 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:49:30.601586 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.601592 | orchestrator | 2025-05-19 19:49:30.601598 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] ******* 2025-05-19 19:49:30.601605 | orchestrator | Monday 19 May 2025 19:46:47 +0000 (0:00:02.927) 0:11:32.837 ************ 2025-05-19 19:49:30.601611 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:49:30.601617 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:49:30.601624 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:49:30.601630 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601637 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601643 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.601651 | orchestrator | 2025-05-19 19:49:30.601658 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-19 19:49:30.601665 | orchestrator | 2025-05-19 19:49:30.601672 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.601691 | orchestrator | Monday 19 May 2025 19:46:50 +0000 (0:00:02.761) 0:11:35.599 ************ 2025-05-19 19:49:30.601698 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.601705 | orchestrator | 2025-05-19 
19:49:30.601711 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.601719 | orchestrator | Monday 19 May 2025 19:46:51 +0000 (0:00:00.807) 0:11:36.406 ************ 2025-05-19 19:49:30.601725 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601732 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601739 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601744 | orchestrator | 2025-05-19 19:49:30.601748 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.601752 | orchestrator | Monday 19 May 2025 19:46:51 +0000 (0:00:00.332) 0:11:36.739 ************ 2025-05-19 19:49:30.601756 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601760 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601764 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.601768 | orchestrator | 2025-05-19 19:49:30.601772 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.601776 | orchestrator | Monday 19 May 2025 19:46:52 +0000 (0:00:00.736) 0:11:37.476 ************ 2025-05-19 19:49:30.601780 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601784 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601788 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.601792 | orchestrator | 2025-05-19 19:49:30.601796 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.601800 | orchestrator | Monday 19 May 2025 19:46:53 +0000 (0:00:00.742) 0:11:38.218 ************ 2025-05-19 19:49:30.601804 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601808 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601812 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.601816 | orchestrator | 2025-05-19 19:49:30.601820 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.601824 | orchestrator | Monday 19 May 2025 19:46:54 +0000 (0:00:01.134) 0:11:39.353 ************ 2025-05-19 19:49:30.601828 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601832 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601836 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601840 | orchestrator | 2025-05-19 19:49:30.601844 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.601849 | orchestrator | Monday 19 May 2025 19:46:54 +0000 (0:00:00.371) 0:11:39.724 ************ 2025-05-19 19:49:30.601853 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601857 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601860 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601864 | orchestrator | 2025-05-19 19:49:30.601868 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.601878 | orchestrator | Monday 19 May 2025 19:46:55 +0000 (0:00:00.378) 0:11:40.103 ************ 2025-05-19 19:49:30.601882 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601886 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601890 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601894 | orchestrator | 2025-05-19 19:49:30.601898 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 
2025-05-19 19:49:30.601902 | orchestrator | Monday 19 May 2025 19:46:55 +0000 (0:00:00.393) 0:11:40.496 ************ 2025-05-19 19:49:30.601906 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601911 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601914 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601918 | orchestrator | 2025-05-19 19:49:30.601922 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.601925 | orchestrator | Monday 19 May 2025 19:46:56 +0000 (0:00:00.732) 0:11:41.229 ************ 2025-05-19 19:49:30.601929 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601933 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601936 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601940 | orchestrator | 2025-05-19 19:49:30.601943 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.601948 | orchestrator | Monday 19 May 2025 19:46:56 +0000 (0:00:00.373) 0:11:41.602 ************ 2025-05-19 19:49:30.601954 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.601960 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.601965 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.601971 | orchestrator | 2025-05-19 19:49:30.601977 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.601981 | orchestrator | Monday 19 May 2025 19:46:57 +0000 (0:00:00.396) 0:11:41.998 ************ 2025-05-19 19:49:30.601984 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.601995 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.601999 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.602003 | orchestrator | 2025-05-19 19:49:30.602007 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.602010 | orchestrator | Monday 19 May 2025 19:46:57 +0000 (0:00:00.844) 0:11:42.842 ************ 2025-05-19 19:49:30.602042 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602046 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602049 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602053 | orchestrator | 2025-05-19 19:49:30.602057 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.602060 | orchestrator | Monday 19 May 2025 19:46:58 +0000 (0:00:00.531) 0:11:43.374 ************ 2025-05-19 19:49:30.602064 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602068 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602072 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602075 | orchestrator | 2025-05-19 19:49:30.602079 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.602083 | orchestrator | Monday 19 May 2025 19:46:58 +0000 (0:00:00.307) 0:11:43.681 ************ 2025-05-19 19:49:30.602087 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.602090 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.602094 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.602098 | orchestrator | 2025-05-19 19:49:30.602101 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.602110 | orchestrator | Monday 19 May 2025 19:46:59 +0000 (0:00:00.337) 
0:11:44.019 ************ 2025-05-19 19:49:30.602114 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.602118 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.602121 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.602125 | orchestrator | 2025-05-19 19:49:30.602129 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.602137 | orchestrator | Monday 19 May 2025 19:46:59 +0000 (0:00:00.315) 0:11:44.335 ************ 2025-05-19 19:49:30.602141 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.602144 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.602148 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.602152 | orchestrator | 2025-05-19 19:49:30.602156 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.602159 | orchestrator | Monday 19 May 2025 19:46:59 +0000 (0:00:00.554) 0:11:44.889 ************ 2025-05-19 19:49:30.602163 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602167 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602170 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602174 | orchestrator | 2025-05-19 19:49:30.602178 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.602182 | orchestrator | Monday 19 May 2025 19:47:00 +0000 (0:00:00.310) 0:11:45.199 ************ 2025-05-19 19:49:30.602185 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602189 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602193 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602196 | orchestrator | 2025-05-19 19:49:30.602200 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.602204 | orchestrator | Monday 19 May 2025 19:47:00 +0000 (0:00:00.333) 0:11:45.533 ************ 2025-05-19 19:49:30.602207 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602211 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602215 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602218 | orchestrator | 2025-05-19 19:49:30.602222 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.602226 | orchestrator | Monday 19 May 2025 19:47:00 +0000 (0:00:00.313) 0:11:45.846 ************ 2025-05-19 19:49:30.602230 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.602235 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.602241 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.602246 | orchestrator | 2025-05-19 19:49:30.602251 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.602259 | orchestrator | Monday 19 May 2025 19:47:01 +0000 (0:00:00.567) 0:11:46.413 ************ 2025-05-19 19:49:30.602267 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602272 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602278 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602284 | orchestrator | 2025-05-19 19:49:30.602289 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.602296 | orchestrator | Monday 19 May 2025 19:47:01 +0000 (0:00:00.330) 0:11:46.744 ************ 2025-05-19 19:49:30.602301 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
19:49:30.602307 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602313 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602371 | orchestrator | 2025-05-19 19:49:30.602376 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.602381 | orchestrator | Monday 19 May 2025 19:47:02 +0000 (0:00:00.294) 0:11:47.039 ************ 2025-05-19 19:49:30.602386 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602392 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602397 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602402 | orchestrator | 2025-05-19 19:49:30.602407 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.602413 | orchestrator | Monday 19 May 2025 19:47:02 +0000 (0:00:00.286) 0:11:47.326 ************ 2025-05-19 19:49:30.602419 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602424 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602430 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602436 | orchestrator | 2025-05-19 19:49:30.602442 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.602448 | orchestrator | Monday 19 May 2025 19:47:02 +0000 (0:00:00.544) 0:11:47.870 ************ 2025-05-19 19:49:30.602461 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602466 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602470 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602474 | orchestrator | 2025-05-19 19:49:30.602477 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.602481 | orchestrator | Monday 19 May 2025 19:47:03 +0000 (0:00:00.318) 0:11:48.189 ************ 2025-05-19 19:49:30.602485 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602492 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602496 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602499 | orchestrator | 2025-05-19 19:49:30.602503 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.602507 | orchestrator | Monday 19 May 2025 19:47:03 +0000 (0:00:00.284) 0:11:48.474 ************ 2025-05-19 19:49:30.602511 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602514 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602518 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602522 | orchestrator | 2025-05-19 19:49:30.602525 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.602529 | orchestrator | Monday 19 May 2025 19:47:03 +0000 (0:00:00.302) 0:11:48.776 ************ 2025-05-19 19:49:30.602533 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602537 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602541 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602544 | orchestrator | 2025-05-19 19:49:30.602548 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.602552 | orchestrator | Monday 19 May 2025 19:47:04 +0000 (0:00:00.643) 0:11:49.420 ************ 2025-05-19 19:49:30.602555 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
19:49:30.602559 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602563 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602566 | orchestrator | 2025-05-19 19:49:30.602577 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.602583 | orchestrator | Monday 19 May 2025 19:47:04 +0000 (0:00:00.353) 0:11:49.773 ************ 2025-05-19 19:49:30.602591 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602595 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602598 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602602 | orchestrator | 2025-05-19 19:49:30.602606 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.602610 | orchestrator | Monday 19 May 2025 19:47:05 +0000 (0:00:00.337) 0:11:50.110 ************ 2025-05-19 19:49:30.602613 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602617 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602621 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602624 | orchestrator | 2025-05-19 19:49:30.602628 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.602632 | orchestrator | Monday 19 May 2025 19:47:05 +0000 (0:00:00.346) 0:11:50.457 ************ 2025-05-19 19:49:30.602635 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602639 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602643 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602646 | orchestrator | 2025-05-19 19:49:30.602650 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.602654 | orchestrator | Monday 19 May 2025 19:47:06 +0000 (0:00:00.712) 0:11:51.169 ************ 2025-05-19 19:49:30.602658 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.602661 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.602665 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602669 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.602672 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.602679 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602683 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.602686 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.602690 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602694 | orchestrator | 2025-05-19 19:49:30.602698 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.602701 | orchestrator | Monday 19 May 2025 19:47:06 +0000 (0:00:00.375) 0:11:51.545 ************ 2025-05-19 19:49:30.602705 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-19 19:49:30.602709 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-19 19:49:30.602712 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602716 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-19 19:49:30.602720 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-19 19:49:30.602724 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602727 | orchestrator | 
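The skipped OSD-counting tasks above would, on OSD hosts, boil down to two ceph-volume calls; a sketch only, the device paths are placeholders for the configured device list:

    ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc   # OSDs that would be created for the lvm scenario
    ceph-volume lvm list --format=json                               # OSDs that already exist on the node

The resulting num_osds is derived from both counts and later feeds the osd_memory_target calculation, which is why the osd_memory_target tasks directly follow.
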
skipping: [testbed-node-5] => (item=osd memory target)  2025-05-19 19:49:30.602731 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-19 19:49:30.602735 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602738 | orchestrator | 2025-05-19 19:49:30.602742 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.602746 | orchestrator | Monday 19 May 2025 19:47:06 +0000 (0:00:00.379) 0:11:51.925 ************ 2025-05-19 19:49:30.602750 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602753 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602757 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602761 | orchestrator | 2025-05-19 19:49:30.602764 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.602768 | orchestrator | Monday 19 May 2025 19:47:07 +0000 (0:00:00.350) 0:11:52.276 ************ 2025-05-19 19:49:30.602772 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602775 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602779 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602783 | orchestrator | 2025-05-19 19:49:30.602787 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.602791 | orchestrator | Monday 19 May 2025 19:47:08 +0000 (0:00:00.708) 0:11:52.984 ************ 2025-05-19 19:49:30.602794 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602798 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602802 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602805 | orchestrator | 2025-05-19 19:49:30.602809 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.602815 | orchestrator | Monday 19 May 2025 19:47:08 +0000 (0:00:00.388) 0:11:53.372 ************ 2025-05-19 19:49:30.602819 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602823 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602826 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602830 | orchestrator | 2025-05-19 19:49:30.602834 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.602838 | orchestrator | Monday 19 May 2025 19:47:08 +0000 (0:00:00.363) 0:11:53.736 ************ 2025-05-19 19:49:30.602841 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602845 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602849 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602852 | orchestrator | 2025-05-19 19:49:30.602856 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.602860 | orchestrator | Monday 19 May 2025 19:47:09 +0000 (0:00:00.348) 0:11:54.085 ************ 2025-05-19 19:49:30.602864 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602867 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602871 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602878 | orchestrator | 2025-05-19 19:49:30.602881 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.602885 | orchestrator | Monday 19 May 2025 19:47:09 +0000 (0:00:00.726) 0:11:54.812 
************ 2025-05-19 19:49:30.602889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.602895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.602899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.602903 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602907 | orchestrator | 2025-05-19 19:49:30.602910 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.602914 | orchestrator | Monday 19 May 2025 19:47:10 +0000 (0:00:00.470) 0:11:55.282 ************ 2025-05-19 19:49:30.602918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.602921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.602925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.602929 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602932 | orchestrator | 2025-05-19 19:49:30.602936 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.602940 | orchestrator | Monday 19 May 2025 19:47:10 +0000 (0:00:00.445) 0:11:55.728 ************ 2025-05-19 19:49:30.602944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.602947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.602951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.602955 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602958 | orchestrator | 2025-05-19 19:49:30.602962 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.602966 | orchestrator | Monday 19 May 2025 19:47:11 +0000 (0:00:00.460) 0:11:56.188 ************ 2025-05-19 19:49:30.602969 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602973 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.602977 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.602980 | orchestrator | 2025-05-19 19:49:30.602984 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.602988 | orchestrator | Monday 19 May 2025 19:47:11 +0000 (0:00:00.358) 0:11:56.547 ************ 2025-05-19 19:49:30.602992 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.602995 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.602999 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.603003 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603006 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.603010 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603014 | orchestrator | 2025-05-19 19:49:30.603017 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.603021 | orchestrator | Monday 19 May 2025 19:47:12 +0000 (0:00:01.286) 0:11:57.834 ************ 2025-05-19 19:49:30.603025 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603028 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603032 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603036 | orchestrator | 2025-05-19 19:49:30.603040 | orchestrator | TASK [ceph-facts : reset rgw_instances 
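The radosgw address facts above are all skipped on this run; when they do run, they resolve _radosgw_address either from a configured CIDR block or from an interface. A hypothetical shell equivalent of the interface case (the interface name below is made up, ceph-ansible uses Ansible facts instead):

    ip -4 -o addr show dev eth0 | awk '{split($4, a, "/"); print a[1]}'   # first IPv4 address on the interface
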
(workaround)] *************************** 2025-05-19 19:49:30.603045 | orchestrator | Monday 19 May 2025 19:47:13 +0000 (0:00:00.386) 0:11:58.221 ************ 2025-05-19 19:49:30.603051 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603056 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603062 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603067 | orchestrator | 2025-05-19 19:49:30.603072 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.603078 | orchestrator | Monday 19 May 2025 19:47:13 +0000 (0:00:00.355) 0:11:58.576 ************ 2025-05-19 19:49:30.603087 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.603092 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603097 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.603102 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603108 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.603113 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603119 | orchestrator | 2025-05-19 19:49:30.603124 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.603130 | orchestrator | Monday 19 May 2025 19:47:14 +0000 (0:00:00.479) 0:11:59.055 ************ 2025-05-19 19:49:30.603135 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.603141 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603147 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.603153 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603162 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.603169 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603175 | orchestrator | 2025-05-19 19:49:30.603181 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.603186 | orchestrator | Monday 19 May 2025 19:47:14 +0000 (0:00:00.737) 0:11:59.792 ************ 2025-05-19 19:49:30.603190 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.603194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.603197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.603201 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.603209 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.603212 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.603216 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603220 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:49:30.603223 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.603233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.603239 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603245 | 
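Although skipped for these hosts, the rgw_instances_host items show the intended per-node layout: one instance rgw0 bound to the node's 192.168.16.x address on port 8081. Once such an instance is running it can be probed directly; a quick check using the address and port from the logged items:

    curl -s http://192.168.16.13:8081/   # a running radosgw typically answers with an anonymous ListAllMyBuckets XML body
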
orchestrator | 2025-05-19 19:49:30.603250 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.603256 | orchestrator | Monday 19 May 2025 19:47:15 +0000 (0:00:00.700) 0:12:00.493 ************ 2025-05-19 19:49:30.603262 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603268 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603274 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603280 | orchestrator | 2025-05-19 19:49:30.603286 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.603290 | orchestrator | Monday 19 May 2025 19:47:16 +0000 (0:00:00.866) 0:12:01.359 ************ 2025-05-19 19:49:30.603293 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.603297 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603301 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.603304 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603308 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.603312 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603330 | orchestrator | 2025-05-19 19:49:30.603337 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.603343 | orchestrator | Monday 19 May 2025 19:47:17 +0000 (0:00:00.659) 0:12:02.018 ************ 2025-05-19 19:49:30.603354 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603360 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603365 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603371 | orchestrator | 2025-05-19 19:49:30.603376 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.603381 | orchestrator | Monday 19 May 2025 19:47:18 +0000 (0:00:00.936) 0:12:02.955 ************ 2025-05-19 19:49:30.603387 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603392 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603398 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603403 | orchestrator | 2025-05-19 19:49:30.603409 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-19 19:49:30.603415 | orchestrator | Monday 19 May 2025 19:47:18 +0000 (0:00:00.607) 0:12:03.563 ************ 2025-05-19 19:49:30.603422 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603428 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603434 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-19 19:49:30.603440 | orchestrator | 2025-05-19 19:49:30.603447 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-19 19:49:30.603451 | orchestrator | Monday 19 May 2025 19:47:19 +0000 (0:00:00.446) 0:12:04.010 ************ 2025-05-19 19:49:30.603455 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.603459 | orchestrator | 2025-05-19 19:49:30.603463 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-19 19:49:30.603466 | orchestrator | Monday 19 May 2025 19:47:21 +0000 (0:00:02.133) 0:12:06.143 ************ 2025-05-19 19:49:30.603473 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 
'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-19 19:49:30.603479 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603482 | orchestrator | 2025-05-19 19:49:30.603486 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-19 19:49:30.603490 | orchestrator | Monday 19 May 2025 19:47:21 +0000 (0:00:00.371) 0:12:06.515 ************ 2025-05-19 19:49:30.603496 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:49:30.603506 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:49:30.603510 | orchestrator | 2025-05-19 19:49:30.603516 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-19 19:49:30.603520 | orchestrator | Monday 19 May 2025 19:47:28 +0000 (0:00:06.889) 0:12:13.404 ************ 2025-05-19 19:49:30.603524 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 19:49:30.603528 | orchestrator | 2025-05-19 19:49:30.603532 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-19 19:49:30.603535 | orchestrator | Monday 19 May 2025 19:47:31 +0000 (0:00:02.995) 0:12:16.400 ************ 2025-05-19 19:49:30.603539 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.603543 | orchestrator | 2025-05-19 19:49:30.603546 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-19 19:49:30.603550 | orchestrator | Monday 19 May 2025 19:47:32 +0000 (0:00:00.764) 0:12:17.165 ************ 2025-05-19 19:49:30.603554 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 19:49:30.603557 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 19:49:30.603564 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 19:49:30.603568 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-19 19:49:30.603576 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-19 19:49:30.603580 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-19 19:49:30.603584 | orchestrator | 2025-05-19 19:49:30.603588 | orchestrator | TASK [ceph-mds : get keys from monitors] *************************************** 2025-05-19 19:49:30.603592 | orchestrator | Monday 19 May 2025 19:47:33 +0000 (0:00:01.111) 0:12:18.276 ************ 2025-05-19 19:49:30.603598 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:49:30.603603 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.603609 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 19:49:30.603615 | 
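The pool and filesystem creation above maps, roughly, onto standard Ceph CLI calls; a sketch with the parameters taken from the logged items (the filesystem name "cephfs" is the usual default and an assumption here):

    ceph osd crush rule dump --format json                              # inspect the default crush rule
    ceph osd pool create cephfs_data 16 16 replicated replicated_rule
    ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
    ceph osd pool application enable cephfs_data cephfs
    ceph osd pool application enable cephfs_metadata cephfs
    ceph fs new cephfs cephfs_metadata cephfs_data                      # metadata pool first, then data pool

The size: 3 in the logged items corresponds to ceph osd pool set <pool> size 3 on each of the two pools.
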
orchestrator | 2025-05-19 19:49:30.603620 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-19 19:49:30.603626 | orchestrator | Monday 19 May 2025 19:47:35 +0000 (0:00:01.790) 0:12:20.066 ************ 2025-05-19 19:49:30.603633 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 19:49:30.603637 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.603641 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603645 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 19:49:30.603648 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.603652 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603656 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 19:49:30.603659 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.603663 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603667 | orchestrator | 2025-05-19 19:49:30.603671 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-19 19:49:30.603674 | orchestrator | Monday 19 May 2025 19:47:36 +0000 (0:00:01.227) 0:12:21.294 ************ 2025-05-19 19:49:30.603678 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603682 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.603686 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.603689 | orchestrator | 2025-05-19 19:49:30.603693 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-19 19:49:30.603697 | orchestrator | Monday 19 May 2025 19:47:36 +0000 (0:00:00.558) 0:12:21.853 ************ 2025-05-19 19:49:30.603700 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.603704 | orchestrator | 2025-05-19 19:49:30.603708 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-19 19:49:30.603712 | orchestrator | Monday 19 May 2025 19:47:37 +0000 (0:00:00.551) 0:12:22.404 ************ 2025-05-19 19:49:30.603715 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.603719 | orchestrator | 2025-05-19 19:49:30.603723 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-19 19:49:30.603727 | orchestrator | Monday 19 May 2025 19:47:38 +0000 (0:00:00.773) 0:12:23.178 ************ 2025-05-19 19:49:30.603730 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603734 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603738 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603741 | orchestrator | 2025-05-19 19:49:30.603745 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-19 19:49:30.603749 | orchestrator | Monday 19 May 2025 19:47:39 +0000 (0:00:01.281) 0:12:24.460 ************ 2025-05-19 19:49:30.603752 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603756 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603760 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603767 | orchestrator | 2025-05-19 19:49:30.603771 | orchestrator | TASK [ceph-mds : enable ceph-mds.target] *************************************** 2025-05-19 19:49:30.603774 | 
orchestrator | Monday 19 May 2025 19:47:40 +0000 (0:00:01.268) 0:12:25.728 ************ 2025-05-19 19:49:30.603778 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603782 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603785 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603789 | orchestrator | 2025-05-19 19:49:30.603793 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-19 19:49:30.603796 | orchestrator | Monday 19 May 2025 19:47:42 +0000 (0:00:01.802) 0:12:27.531 ************ 2025-05-19 19:49:30.603800 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603804 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603807 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603811 | orchestrator | 2025-05-19 19:49:30.603815 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-19 19:49:30.603821 | orchestrator | Monday 19 May 2025 19:47:44 +0000 (0:00:01.882) 0:12:29.413 ************ 2025-05-19 19:49:30.603825 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-19 19:49:30.603829 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-19 19:49:30.603833 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 2025-05-19 19:49:30.603837 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.603840 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.603844 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.603848 | orchestrator | 2025-05-19 19:49:30.603851 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-19 19:49:30.603855 | orchestrator | Monday 19 May 2025 19:48:01 +0000 (0:00:17.043) 0:12:46.457 ************ 2025-05-19 19:49:30.603859 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603863 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603866 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.603870 | orchestrator | 2025-05-19 19:49:30.603874 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-19 19:49:30.603877 | orchestrator | Monday 19 May 2025 19:48:02 +0000 (0:00:00.654) 0:12:47.112 ************ 2025-05-19 19:49:30.603881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.603885 | orchestrator | 2025-05-19 19:49:30.603892 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-19 19:49:30.603895 | orchestrator | Monday 19 May 2025 19:48:02 +0000 (0:00:00.641) 0:12:47.754 ************ 2025-05-19 19:49:30.603899 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.603903 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.603907 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.603910 | orchestrator | 2025-05-19 19:49:30.603916 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-19 19:49:30.603922 | orchestrator | Monday 19 May 2025 19:48:03 +0000 (0:00:00.296) 0:12:48.051 ************ 2025-05-19 19:49:30.603928 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.603934 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.603941 | orchestrator | 
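The MDS bring-up above is plain systemd plus an admin-socket check; a minimal equivalent for one node, assuming the conventional unit and socket names (the generated templates may differ slightly):

    systemctl enable --now ceph-mds.target
    systemctl start ceph-mds@testbed-node-3                  # the generated unit runs the ceph-mds container
    test -S /var/run/ceph/ceph-mds.testbed-node-3.asok       # the "wait for mds socket" task retries until this exists

The single FAILED - RETRYING round per node simply means the daemon inside the container was still starting; the check succeeded on the next retry.
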
changed: [testbed-node-5] 2025-05-19 19:49:30.603945 | orchestrator | 2025-05-19 19:49:30.603948 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-19 19:49:30.603952 | orchestrator | Monday 19 May 2025 19:48:04 +0000 (0:00:01.192) 0:12:49.244 ************ 2025-05-19 19:49:30.603956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.603959 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.603963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.603967 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.603970 | orchestrator | 2025-05-19 19:49:30.603977 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-19 19:49:30.603981 | orchestrator | Monday 19 May 2025 19:48:05 +0000 (0:00:01.138) 0:12:50.383 ************ 2025-05-19 19:49:30.603985 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.603988 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.603992 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.603996 | orchestrator | 2025-05-19 19:49:30.603999 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-19 19:49:30.604003 | orchestrator | Monday 19 May 2025 19:48:05 +0000 (0:00:00.331) 0:12:50.714 ************ 2025-05-19 19:49:30.604007 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.604010 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.604014 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.604018 | orchestrator | 2025-05-19 19:49:30.604022 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-19 19:49:30.604025 | orchestrator | 2025-05-19 19:49:30.604029 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-19 19:49:30.604033 | orchestrator | Monday 19 May 2025 19:48:07 +0000 (0:00:02.009) 0:12:52.723 ************ 2025-05-19 19:49:30.604037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.604040 | orchestrator | 2025-05-19 19:49:30.604044 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-19 19:49:30.604048 | orchestrator | Monday 19 May 2025 19:48:08 +0000 (0:00:00.768) 0:12:53.492 ************ 2025-05-19 19:49:30.604051 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604055 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604059 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604062 | orchestrator | 2025-05-19 19:49:30.604066 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-19 19:49:30.604070 | orchestrator | Monday 19 May 2025 19:48:08 +0000 (0:00:00.361) 0:12:53.853 ************ 2025-05-19 19:49:30.604073 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604077 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604081 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604084 | orchestrator | 2025-05-19 19:49:30.604088 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-19 19:49:30.604092 | orchestrator | Monday 19 May 2025 19:48:09 +0000 (0:00:00.854) 0:12:54.708 ************ 2025-05-19 
19:49:30.604096 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604099 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604103 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604107 | orchestrator | 2025-05-19 19:49:30.604110 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-19 19:49:30.604114 | orchestrator | Monday 19 May 2025 19:48:10 +0000 (0:00:01.105) 0:12:55.813 ************ 2025-05-19 19:49:30.604118 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604121 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604125 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604129 | orchestrator | 2025-05-19 19:49:30.604132 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-19 19:49:30.604136 | orchestrator | Monday 19 May 2025 19:48:11 +0000 (0:00:00.744) 0:12:56.557 ************ 2025-05-19 19:49:30.604140 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604148 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604152 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604156 | orchestrator | 2025-05-19 19:49:30.604159 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-19 19:49:30.604163 | orchestrator | Monday 19 May 2025 19:48:11 +0000 (0:00:00.325) 0:12:56.883 ************ 2025-05-19 19:49:30.604167 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604171 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604174 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604178 | orchestrator | 2025-05-19 19:49:30.604182 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-19 19:49:30.604188 | orchestrator | Monday 19 May 2025 19:48:12 +0000 (0:00:00.292) 0:12:57.176 ************ 2025-05-19 19:49:30.604192 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604195 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604199 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604203 | orchestrator | 2025-05-19 19:49:30.604207 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-19 19:49:30.604210 | orchestrator | Monday 19 May 2025 19:48:12 +0000 (0:00:00.592) 0:12:57.769 ************ 2025-05-19 19:49:30.604214 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604218 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604221 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604225 | orchestrator | 2025-05-19 19:49:30.604232 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-19 19:49:30.604241 | orchestrator | Monday 19 May 2025 19:48:13 +0000 (0:00:00.335) 0:12:58.104 ************ 2025-05-19 19:49:30.604246 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604252 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604257 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604263 | orchestrator | 2025-05-19 19:49:30.604270 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-19 19:49:30.604276 | orchestrator | Monday 19 May 2025 19:48:13 +0000 (0:00:00.309) 0:12:58.413 ************ 2025-05-19 19:49:30.604282 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604288 | 
orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604292 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604296 | orchestrator | 2025-05-19 19:49:30.604299 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-19 19:49:30.604303 | orchestrator | Monday 19 May 2025 19:48:13 +0000 (0:00:00.307) 0:12:58.721 ************ 2025-05-19 19:49:30.604307 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604310 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604350 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604355 | orchestrator | 2025-05-19 19:49:30.604359 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-19 19:49:30.604363 | orchestrator | Monday 19 May 2025 19:48:14 +0000 (0:00:01.098) 0:12:59.820 ************ 2025-05-19 19:49:30.604367 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604370 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604374 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604378 | orchestrator | 2025-05-19 19:49:30.604381 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-19 19:49:30.604385 | orchestrator | Monday 19 May 2025 19:48:15 +0000 (0:00:00.332) 0:13:00.152 ************ 2025-05-19 19:49:30.604389 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604393 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604396 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604400 | orchestrator | 2025-05-19 19:49:30.604403 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-19 19:49:30.604407 | orchestrator | Monday 19 May 2025 19:48:15 +0000 (0:00:00.330) 0:13:00.483 ************ 2025-05-19 19:49:30.604411 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604415 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604418 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604422 | orchestrator | 2025-05-19 19:49:30.604426 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-19 19:49:30.604429 | orchestrator | Monday 19 May 2025 19:48:15 +0000 (0:00:00.354) 0:13:00.837 ************ 2025-05-19 19:49:30.604433 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604437 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604440 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604444 | orchestrator | 2025-05-19 19:49:30.604448 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-19 19:49:30.604455 | orchestrator | Monday 19 May 2025 19:48:16 +0000 (0:00:00.693) 0:13:01.531 ************ 2025-05-19 19:49:30.604459 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604463 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604466 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604470 | orchestrator | 2025-05-19 19:49:30.604474 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-19 19:49:30.604477 | orchestrator | Monday 19 May 2025 19:48:16 +0000 (0:00:00.323) 0:13:01.855 ************ 2025-05-19 19:49:30.604481 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604485 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604488 | orchestrator | skipping: [testbed-node-5] 2025-05-19 
19:49:30.604492 | orchestrator | 2025-05-19 19:49:30.604496 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-19 19:49:30.604499 | orchestrator | Monday 19 May 2025 19:48:17 +0000 (0:00:00.347) 0:13:02.203 ************ 2025-05-19 19:49:30.604503 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604507 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604510 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604514 | orchestrator | 2025-05-19 19:49:30.604518 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-19 19:49:30.604521 | orchestrator | Monday 19 May 2025 19:48:17 +0000 (0:00:00.356) 0:13:02.559 ************ 2025-05-19 19:49:30.604525 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604529 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604532 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604536 | orchestrator | 2025-05-19 19:49:30.604540 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-19 19:49:30.604543 | orchestrator | Monday 19 May 2025 19:48:18 +0000 (0:00:00.649) 0:13:03.209 ************ 2025-05-19 19:49:30.604547 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.604551 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.604557 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.604561 | orchestrator | 2025-05-19 19:49:30.604565 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-19 19:49:30.604569 | orchestrator | Monday 19 May 2025 19:48:18 +0000 (0:00:00.354) 0:13:03.564 ************ 2025-05-19 19:49:30.604572 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604576 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604580 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604583 | orchestrator | 2025-05-19 19:49:30.604587 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 19:49:30.604591 | orchestrator | Monday 19 May 2025 19:48:18 +0000 (0:00:00.346) 0:13:03.910 ************ 2025-05-19 19:49:30.604595 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604598 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604602 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604606 | orchestrator | 2025-05-19 19:49:30.604609 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-19 19:49:30.604613 | orchestrator | Monday 19 May 2025 19:48:19 +0000 (0:00:00.348) 0:13:04.258 ************ 2025-05-19 19:49:30.604617 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604620 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604624 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604628 | orchestrator | 2025-05-19 19:49:30.604631 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-19 19:49:30.604638 | orchestrator | Monday 19 May 2025 19:48:19 +0000 (0:00:00.679) 0:13:04.938 ************ 2025-05-19 19:49:30.604642 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604646 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604650 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604653 | orchestrator | 2025-05-19 19:49:30.604657 | orchestrator | 
TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-19 19:49:30.604661 | orchestrator | Monday 19 May 2025 19:48:20 +0000 (0:00:00.402) 0:13:05.340 ************ 2025-05-19 19:49:30.604667 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604671 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604675 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604679 | orchestrator | 2025-05-19 19:49:30.604683 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-19 19:49:30.604686 | orchestrator | Monday 19 May 2025 19:48:20 +0000 (0:00:00.347) 0:13:05.688 ************ 2025-05-19 19:49:30.604690 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604694 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604697 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604701 | orchestrator | 2025-05-19 19:49:30.604705 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-19 19:49:30.604709 | orchestrator | Monday 19 May 2025 19:48:21 +0000 (0:00:00.343) 0:13:06.032 ************ 2025-05-19 19:49:30.604712 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604716 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604720 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604723 | orchestrator | 2025-05-19 19:49:30.604727 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 19:49:30.604731 | orchestrator | Monday 19 May 2025 19:48:21 +0000 (0:00:00.659) 0:13:06.691 ************ 2025-05-19 19:49:30.604735 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604738 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604742 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604746 | orchestrator | 2025-05-19 19:49:30.604750 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 19:49:30.604753 | orchestrator | Monday 19 May 2025 19:48:22 +0000 (0:00:00.350) 0:13:07.042 ************ 2025-05-19 19:49:30.604757 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604761 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604765 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604768 | orchestrator | 2025-05-19 19:49:30.604772 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 19:49:30.604776 | orchestrator | Monday 19 May 2025 19:48:22 +0000 (0:00:00.381) 0:13:07.424 ************ 2025-05-19 19:49:30.604779 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604783 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604787 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604794 | orchestrator | 2025-05-19 19:49:30.604800 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 19:49:30.604806 | orchestrator | Monday 19 May 2025 19:48:22 +0000 (0:00:00.365) 0:13:07.789 ************ 2025-05-19 19:49:30.604812 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604817 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604821 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604825 | orchestrator | 2025-05-19 
19:49:30.604828 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-19 19:49:30.604832 | orchestrator | Monday 19 May 2025 19:48:23 +0000 (0:00:00.715) 0:13:08.504 ************ 2025-05-19 19:49:30.604836 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604840 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604843 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604847 | orchestrator | 2025-05-19 19:49:30.604851 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-19 19:49:30.604854 | orchestrator | Monday 19 May 2025 19:48:23 +0000 (0:00:00.363) 0:13:08.868 ************ 2025-05-19 19:49:30.604858 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.604862 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-19 19:49:30.604865 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604869 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.604873 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-19 19:49:30.604879 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604883 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.604887 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-19 19:49:30.604890 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604894 | orchestrator | 2025-05-19 19:49:30.604898 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-19 19:49:30.604904 | orchestrator | Monday 19 May 2025 19:48:24 +0000 (0:00:00.426) 0:13:09.295 ************ 2025-05-19 19:49:30.604908 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-19 19:49:30.604911 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-19 19:49:30.604915 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604919 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-19 19:49:30.604922 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-19 19:49:30.604926 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604930 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-19 19:49:30.604934 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-19 19:49:30.604937 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604941 | orchestrator | 2025-05-19 19:49:30.604945 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-19 19:49:30.604948 | orchestrator | Monday 19 May 2025 19:48:24 +0000 (0:00:00.348) 0:13:09.643 ************ 2025-05-19 19:49:30.604952 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604956 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604959 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.604963 | orchestrator | 2025-05-19 19:49:30.604967 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-19 19:49:30.604974 | orchestrator | Monday 19 May 2025 19:48:25 +0000 (0:00:00.726) 0:13:10.370 ************ 2025-05-19 19:49:30.604977 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.604981 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.604985 | orchestrator | skipping: [testbed-node-5] 
2025-05-19 19:49:30.604988 | orchestrator | 2025-05-19 19:49:30.604992 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:49:30.604996 | orchestrator | Monday 19 May 2025 19:48:25 +0000 (0:00:00.354) 0:13:10.725 ************ 2025-05-19 19:49:30.605000 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605003 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605007 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605011 | orchestrator | 2025-05-19 19:49:30.605014 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:49:30.605018 | orchestrator | Monday 19 May 2025 19:48:26 +0000 (0:00:00.361) 0:13:11.086 ************ 2025-05-19 19:49:30.605022 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605026 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605029 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605033 | orchestrator | 2025-05-19 19:49:30.605037 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:49:30.605040 | orchestrator | Monday 19 May 2025 19:48:26 +0000 (0:00:00.330) 0:13:11.416 ************ 2025-05-19 19:49:30.605044 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605048 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605051 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605055 | orchestrator | 2025-05-19 19:49:30.605059 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:49:30.605063 | orchestrator | Monday 19 May 2025 19:48:27 +0000 (0:00:00.752) 0:13:12.168 ************ 2025-05-19 19:49:30.605066 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605070 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605074 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605080 | orchestrator | 2025-05-19 19:49:30.605084 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-19 19:49:30.605088 | orchestrator | Monday 19 May 2025 19:48:27 +0000 (0:00:00.347) 0:13:12.516 ************ 2025-05-19 19:49:30.605092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.605095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.605099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.605103 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605106 | orchestrator | 2025-05-19 19:49:30.605110 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:49:30.605114 | orchestrator | Monday 19 May 2025 19:48:28 +0000 (0:00:00.456) 0:13:12.972 ************ 2025-05-19 19:49:30.605117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.605121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.605125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.605128 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605132 | orchestrator | 2025-05-19 19:49:30.605136 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:49:30.605140 | 
orchestrator | Monday 19 May 2025 19:48:28 +0000 (0:00:00.429) 0:13:13.402 ************ 2025-05-19 19:49:30.605143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.605147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.605151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.605154 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605158 | orchestrator | 2025-05-19 19:49:30.605162 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.605165 | orchestrator | Monday 19 May 2025 19:48:28 +0000 (0:00:00.436) 0:13:13.838 ************ 2025-05-19 19:49:30.605169 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605173 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605176 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605180 | orchestrator | 2025-05-19 19:49:30.605184 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:49:30.605187 | orchestrator | Monday 19 May 2025 19:48:29 +0000 (0:00:00.350) 0:13:14.188 ************ 2025-05-19 19:49:30.605191 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.605195 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605198 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.605205 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605208 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.605212 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605216 | orchestrator | 2025-05-19 19:49:30.605220 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:49:30.605223 | orchestrator | Monday 19 May 2025 19:48:30 +0000 (0:00:00.862) 0:13:15.051 ************ 2025-05-19 19:49:30.605229 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605235 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605241 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605247 | orchestrator | 2025-05-19 19:49:30.605253 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:49:30.605259 | orchestrator | Monday 19 May 2025 19:48:30 +0000 (0:00:00.354) 0:13:15.406 ************ 2025-05-19 19:49:30.605264 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605270 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605276 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605283 | orchestrator | 2025-05-19 19:49:30.605288 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:49:30.605294 | orchestrator | Monday 19 May 2025 19:48:30 +0000 (0:00:00.350) 0:13:15.756 ************ 2025-05-19 19:49:30.605305 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:49:30.605311 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605332 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:49:30.605342 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605346 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:49:30.605350 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605354 | orchestrator | 2025-05-19 19:49:30.605357 | orchestrator | TASK [ceph-facts 
: set_fact rgw_instances_host] ******************************** 2025-05-19 19:49:30.605361 | orchestrator | Monday 19 May 2025 19:48:31 +0000 (0:00:00.454) 0:13:16.210 ************ 2025-05-19 19:49:30.605365 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.605369 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605373 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.605376 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605380 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:49:30.605384 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605387 | orchestrator | 2025-05-19 19:49:30.605391 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:49:30.605395 | orchestrator | Monday 19 May 2025 19:48:31 +0000 (0:00:00.734) 0:13:16.944 ************ 2025-05-19 19:49:30.605399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.605402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.605406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.605410 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605413 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:49:30.605417 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:49:30.605421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:49:30.605424 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605428 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:49:30.605432 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:49:30.605435 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:49:30.605439 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605442 | orchestrator | 2025-05-19 19:49:30.605446 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-19 19:49:30.605450 | orchestrator | Monday 19 May 2025 19:48:32 +0000 (0:00:00.619) 0:13:17.564 ************ 2025-05-19 19:49:30.605454 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605457 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605461 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605465 | orchestrator | 2025-05-19 19:49:30.605468 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-19 19:49:30.605472 | orchestrator | Monday 19 May 2025 19:48:33 +0000 (0:00:00.926) 0:13:18.490 ************ 2025-05-19 19:49:30.605476 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.605479 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605483 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.605487 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605490 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.605494 | orchestrator | skipping: [testbed-node-5] 
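The ceph-facts loop items above describe the radosgw layout that the following ceph-rgw tasks consume: one instance named rgw0 per host, bound to that host's 192.168.16.x address on frontend port 8081. Re-expressed as YAML purely for readability (the variable name rgw_instances comes from the task titles; where exactly it lives in the configuration is an assumption), the testbed-node-3 item looks like:

rgw_instances:
  - instance_name: rgw0
    radosgw_address: 192.168.16.13    # testbed-node-4/-5 use .14/.15 with the same port
    radosgw_frontend_port: 8081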
2025-05-19 19:49:30.605498 | orchestrator | 2025-05-19 19:49:30.605501 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-19 19:49:30.605505 | orchestrator | Monday 19 May 2025 19:48:34 +0000 (0:00:00.576) 0:13:19.066 ************ 2025-05-19 19:49:30.605515 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605521 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605527 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605533 | orchestrator | 2025-05-19 19:49:30.605540 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-19 19:49:30.605543 | orchestrator | Monday 19 May 2025 19:48:34 +0000 (0:00:00.869) 0:13:19.936 ************ 2025-05-19 19:49:30.605547 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605551 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605554 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605558 | orchestrator | 2025-05-19 19:49:30.605562 | orchestrator | TASK [ceph-rgw : include common.yml] ******************************************* 2025-05-19 19:49:30.605565 | orchestrator | Monday 19 May 2025 19:48:35 +0000 (0:00:00.604) 0:13:20.540 ************ 2025-05-19 19:49:30.605572 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.605576 | orchestrator | 2025-05-19 19:49:30.605579 | orchestrator | TASK [ceph-rgw : create rados gateway directories] ***************************** 2025-05-19 19:49:30.605583 | orchestrator | Monday 19 May 2025 19:48:36 +0000 (0:00:00.901) 0:13:21.441 ************ 2025-05-19 19:49:30.605587 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2025-05-19 19:49:30.605590 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2025-05-19 19:49:30.605594 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2025-05-19 19:49:30.605598 | orchestrator | 2025-05-19 19:49:30.605601 | orchestrator | TASK [ceph-rgw : get keys from monitors] *************************************** 2025-05-19 19:49:30.605605 | orchestrator | Monday 19 May 2025 19:48:37 +0000 (0:00:00.724) 0:13:22.165 ************ 2025-05-19 19:49:30.605609 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:49:30.605612 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.605616 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 19:49:30.605620 | orchestrator | 2025-05-19 19:49:30.605623 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] *********************************** 2025-05-19 19:49:30.605627 | orchestrator | Monday 19 May 2025 19:48:39 +0000 (0:00:01.973) 0:13:24.138 ************ 2025-05-19 19:49:30.605633 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 19:49:30.605637 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 19:49:30.605641 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.605645 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 19:49:30.605648 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 19:49:30.605652 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.605655 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 19:49:30.605659 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 19:49:30.605663 | 
orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.605666 | orchestrator | 2025-05-19 19:49:30.605670 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] ********** 2025-05-19 19:49:30.605674 | orchestrator | Monday 19 May 2025 19:48:40 +0000 (0:00:01.588) 0:13:25.727 ************ 2025-05-19 19:49:30.605677 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605681 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605685 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605688 | orchestrator | 2025-05-19 19:49:30.605692 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ****************************** 2025-05-19 19:49:30.605696 | orchestrator | Monday 19 May 2025 19:48:41 +0000 (0:00:00.375) 0:13:26.102 ************ 2025-05-19 19:49:30.605699 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605707 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605710 | orchestrator | 2025-05-19 19:49:30.605714 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] ************************************** 2025-05-19 19:49:30.605718 | orchestrator | Monday 19 May 2025 19:48:41 +0000 (0:00:00.359) 0:13:26.462 ************ 2025-05-19 19:49:30.605727 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-19 19:49:30.605730 | orchestrator | 2025-05-19 19:49:30.605734 | orchestrator | TASK [ceph-rgw : create ec profile] ******************************************** 2025-05-19 19:49:30.605738 | orchestrator | Monday 19 May 2025 19:48:41 +0000 (0:00:00.246) 0:13:26.709 ************ 2025-05-19 19:49:30.605742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605761 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605765 | orchestrator | 2025-05-19 19:49:30.605768 | orchestrator | TASK [ceph-rgw : set crush rule] *********************************************** 2025-05-19 19:49:30.605772 | orchestrator | Monday 19 May 2025 19:48:43 +0000 (0:00:01.321) 0:13:28.030 ************ 2025-05-19 19:49:30.605776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605794 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605798 | orchestrator | 2025-05-19 19:49:30.605802 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] ************************************** 2025-05-19 19:49:30.605808 | orchestrator | Monday 19 May 2025 19:48:43 +0000 (0:00:00.713) 0:13:28.744 ************ 2025-05-19 19:49:30.605812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 19:49:30.605830 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605834 | orchestrator | 2025-05-19 19:49:30.605838 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ****************************** 2025-05-19 19:49:30.605841 | orchestrator | Monday 19 May 2025 19:48:44 +0000 (0:00:00.644) 0:13:29.389 ************ 2025-05-19 19:49:30.605847 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 19:49:30.605856 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 19:49:30.605860 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 19:49:30.605864 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 19:49:30.605868 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 19:49:30.605871 | orchestrator | 2025-05-19 19:49:30.605875 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] ************************* 2025-05-19 19:49:30.605879 | orchestrator | Monday 19 May 2025 19:49:11 +0000 (0:00:26.732) 0:13:56.122 ************ 2025-05-19 19:49:30.605883 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605886 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605890 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605894 | orchestrator | 2025-05-19 19:49:30.605897 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ****************************** 2025-05-19 19:49:30.605901 | orchestrator | Monday 19 May 2025 
19:49:11 +0000 (0:00:00.548) 0:13:56.670 ************ 2025-05-19 19:49:30.605905 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.605909 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.605912 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.605916 | orchestrator | 2025-05-19 19:49:30.605920 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] ********************************* 2025-05-19 19:49:30.605923 | orchestrator | Monday 19 May 2025 19:49:12 +0000 (0:00:00.344) 0:13:57.015 ************ 2025-05-19 19:49:30.605927 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.605931 | orchestrator | 2025-05-19 19:49:30.605935 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] ************************************* 2025-05-19 19:49:30.605938 | orchestrator | Monday 19 May 2025 19:49:12 +0000 (0:00:00.600) 0:13:57.616 ************ 2025-05-19 19:49:30.605942 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.605946 | orchestrator | 2025-05-19 19:49:30.605949 | orchestrator | TASK [ceph-rgw : generate systemd unit file] *********************************** 2025-05-19 19:49:30.605953 | orchestrator | Monday 19 May 2025 19:49:13 +0000 (0:00:00.908) 0:13:58.524 ************ 2025-05-19 19:49:30.605957 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.605961 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.605964 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.605968 | orchestrator | 2025-05-19 19:49:30.605972 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ******************** 2025-05-19 19:49:30.605975 | orchestrator | Monday 19 May 2025 19:49:14 +0000 (0:00:01.289) 0:13:59.813 ************ 2025-05-19 19:49:30.605979 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.605983 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.605986 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.605990 | orchestrator | 2025-05-19 19:49:30.605994 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] *********************************** 2025-05-19 19:49:30.605997 | orchestrator | Monday 19 May 2025 19:49:16 +0000 (0:00:01.235) 0:14:01.049 ************ 2025-05-19 19:49:30.606001 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.606005 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.606008 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.606035 | orchestrator | 2025-05-19 19:49:30.606040 | orchestrator | TASK [ceph-rgw : systemd start rgw container] ********************************** 2025-05-19 19:49:30.606043 | orchestrator | Monday 19 May 2025 19:49:18 +0000 (0:00:02.177) 0:14:03.226 ************ 2025-05-19 19:49:30.606059 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.606065 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.606069 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 19:49:30.606073 | orchestrator | 2025-05-19 19:49:30.606077 | orchestrator | TASK [ceph-rgw : include_tasks 
multisite/main.yml] ***************************** 2025-05-19 19:49:30.606081 | orchestrator | Monday 19 May 2025 19:49:20 +0000 (0:00:02.013) 0:14:05.240 ************ 2025-05-19 19:49:30.606084 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.606088 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:49:30.606092 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:49:30.606095 | orchestrator | 2025-05-19 19:49:30.606099 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-19 19:49:30.606103 | orchestrator | Monday 19 May 2025 19:49:21 +0000 (0:00:01.254) 0:14:06.494 ************ 2025-05-19 19:49:30.606106 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.606110 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.606114 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.606118 | orchestrator | 2025-05-19 19:49:30.606121 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-19 19:49:30.606125 | orchestrator | Monday 19 May 2025 19:49:22 +0000 (0:00:00.782) 0:14:07.277 ************ 2025-05-19 19:49:30.606132 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:49:30.606136 | orchestrator | 2025-05-19 19:49:30.606140 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ******** 2025-05-19 19:49:30.606144 | orchestrator | Monday 19 May 2025 19:49:23 +0000 (0:00:00.982) 0:14:08.260 ************ 2025-05-19 19:49:30.606147 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.606151 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.606155 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.606158 | orchestrator | 2025-05-19 19:49:30.606162 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-19 19:49:30.606166 | orchestrator | Monday 19 May 2025 19:49:23 +0000 (0:00:00.463) 0:14:08.723 ************ 2025-05-19 19:49:30.606169 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.606173 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.606177 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.606181 | orchestrator | 2025-05-19 19:49:30.606184 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-19 19:49:30.606188 | orchestrator | Monday 19 May 2025 19:49:25 +0000 (0:00:01.690) 0:14:10.414 ************ 2025-05-19 19:49:30.606192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:49:30.606195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:49:30.606199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:49:30.606203 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:49:30.606207 | orchestrator | 2025-05-19 19:49:30.606210 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-19 19:49:30.606214 | orchestrator | Monday 19 May 2025 19:49:26 +0000 (0:00:00.764) 0:14:11.179 ************ 2025-05-19 19:49:30.606218 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:49:30.606221 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:49:30.606226 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:49:30.606232 | orchestrator | 2025-05-19 19:49:30.606238 | orchestrator | RUNNING HANDLER [ceph-handler : remove 
tempdir for scripts] ******************** 2025-05-19 19:49:30.606245 | orchestrator | Monday 19 May 2025 19:49:26 +0000 (0:00:00.390) 0:14:11.569 ************ 2025-05-19 19:49:30.606250 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:49:30.606253 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:49:30.606262 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:49:30.606266 | orchestrator | 2025-05-19 19:49:30.606269 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:49:30.606273 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-19 19:49:30.606278 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-19 19:49:30.606282 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-19 19:49:30.606286 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-19 19:49:30.606290 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-19 19:49:30.606293 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-19 19:49:30.606297 | orchestrator | 2025-05-19 19:49:30.606301 | orchestrator | 2025-05-19 19:49:30.606304 | orchestrator | 2025-05-19 19:49:30.606308 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:49:30.606312 | orchestrator | Monday 19 May 2025 19:49:28 +0000 (0:00:01.521) 0:14:13.091 ************ 2025-05-19 19:49:30.606328 | orchestrator | =============================================================================== 2025-05-19 19:49:30.606332 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 91.53s 2025-05-19 19:49:30.606336 | orchestrator | ceph-osd : use ceph-volume to create bluestore osds -------------------- 40.51s 2025-05-19 19:49:30.606339 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 26.73s 2025-05-19 19:49:30.606345 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... 
------------ 21.62s 2025-05-19 19:49:30.606349 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.04s 2025-05-19 19:49:30.606353 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.44s 2025-05-19 19:49:30.606357 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.67s 2025-05-19 19:49:30.606360 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 8.44s 2025-05-19 19:49:30.606364 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.27s 2025-05-19 19:49:30.606368 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.89s 2025-05-19 19:49:30.606372 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.73s 2025-05-19 19:49:30.606375 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.55s 2025-05-19 19:49:30.606379 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.29s 2025-05-19 19:49:30.606383 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 4.92s 2025-05-19 19:49:30.606386 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 4.67s 2025-05-19 19:49:30.606393 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 4.46s 2025-05-19 19:49:30.606397 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.37s 2025-05-19 19:49:30.606400 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 4.01s 2025-05-19 19:49:30.606404 | orchestrator | ceph-handler : remove tempdir for scripts ------------------------------- 3.32s 2025-05-19 19:49:30.606408 | orchestrator | ceph-osd : apply operating system tuning -------------------------------- 3.14s 2025-05-19 19:49:30.606411 | orchestrator | 2025-05-19 19:49:30 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:30.606422 | orchestrator | 2025-05-19 19:49:30 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:30.606426 | orchestrator | 2025-05-19 19:49:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:33.587561 | orchestrator | 2025-05-19 19:49:33 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:33.587786 | orchestrator | 2025-05-19 19:49:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:33.588912 | orchestrator | 2025-05-19 19:49:33 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:33.592413 | orchestrator | 2025-05-19 19:49:33 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:33.592468 | orchestrator | 2025-05-19 19:49:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:36.634983 | orchestrator | 2025-05-19 19:49:36 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:36.635801 | orchestrator | 2025-05-19 19:49:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:36.640474 | orchestrator | 2025-05-19 19:49:36 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:36.640524 | orchestrator | 2025-05-19 19:49:36 | INFO  | Task 
4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:36.640535 | orchestrator | 2025-05-19 19:49:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:39.673259 | orchestrator | 2025-05-19 19:49:39 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:39.676014 | orchestrator | 2025-05-19 19:49:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:39.677002 | orchestrator | 2025-05-19 19:49:39 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:39.678412 | orchestrator | 2025-05-19 19:49:39 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:39.678569 | orchestrator | 2025-05-19 19:49:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:42.721687 | orchestrator | 2025-05-19 19:49:42 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:42.721796 | orchestrator | 2025-05-19 19:49:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:42.722361 | orchestrator | 2025-05-19 19:49:42 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:42.723030 | orchestrator | 2025-05-19 19:49:42 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:42.723071 | orchestrator | 2025-05-19 19:49:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:45.762349 | orchestrator | 2025-05-19 19:49:45 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:45.763580 | orchestrator | 2025-05-19 19:49:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:45.763626 | orchestrator | 2025-05-19 19:49:45 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:45.763909 | orchestrator | 2025-05-19 19:49:45 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:45.763985 | orchestrator | 2025-05-19 19:49:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:48.821556 | orchestrator | 2025-05-19 19:49:48 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:48.822976 | orchestrator | 2025-05-19 19:49:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:48.825107 | orchestrator | 2025-05-19 19:49:48 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:48.829070 | orchestrator | 2025-05-19 19:49:48 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:48.829180 | orchestrator | 2025-05-19 19:49:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:51.877628 | orchestrator | 2025-05-19 19:49:51 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:51.879684 | orchestrator | 2025-05-19 19:49:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:51.879973 | orchestrator | 2025-05-19 19:49:51 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:51.882356 | orchestrator | 2025-05-19 19:49:51 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:51.882405 | orchestrator | 2025-05-19 19:49:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:54.925346 | orchestrator | 2025-05-19 19:49:54 | INFO  | Task 
bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:54.930238 | orchestrator | 2025-05-19 19:49:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:54.933019 | orchestrator | 2025-05-19 19:49:54 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:54.936526 | orchestrator | 2025-05-19 19:49:54 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:54.936819 | orchestrator | 2025-05-19 19:49:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:49:57.983737 | orchestrator | 2025-05-19 19:49:57 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:49:57.984174 | orchestrator | 2025-05-19 19:49:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:49:57.985333 | orchestrator | 2025-05-19 19:49:57 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:49:57.987330 | orchestrator | 2025-05-19 19:49:57 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:49:57.987361 | orchestrator | 2025-05-19 19:49:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:01.034692 | orchestrator | 2025-05-19 19:50:01 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:01.036682 | orchestrator | 2025-05-19 19:50:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:01.038323 | orchestrator | 2025-05-19 19:50:01 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:01.039525 | orchestrator | 2025-05-19 19:50:01 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:01.039550 | orchestrator | 2025-05-19 19:50:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:04.089036 | orchestrator | 2025-05-19 19:50:04 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:04.090317 | orchestrator | 2025-05-19 19:50:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:04.091334 | orchestrator | 2025-05-19 19:50:04 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:04.092433 | orchestrator | 2025-05-19 19:50:04 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:04.092475 | orchestrator | 2025-05-19 19:50:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:07.141756 | orchestrator | 2025-05-19 19:50:07 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:07.143059 | orchestrator | 2025-05-19 19:50:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:07.144002 | orchestrator | 2025-05-19 19:50:07 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:07.144756 | orchestrator | 2025-05-19 19:50:07 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:07.144784 | orchestrator | 2025-05-19 19:50:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:10.183700 | orchestrator | 2025-05-19 19:50:10 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:10.184840 | orchestrator | 2025-05-19 19:50:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:10.187497 | orchestrator | 2025-05-19 19:50:10 | INFO  | Task 
63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:10.188752 | orchestrator | 2025-05-19 19:50:10 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:10.188959 | orchestrator | 2025-05-19 19:50:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:13.255024 | orchestrator | 2025-05-19 19:50:13 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:13.256709 | orchestrator | 2025-05-19 19:50:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:13.258436 | orchestrator | 2025-05-19 19:50:13 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:13.261733 | orchestrator | 2025-05-19 19:50:13 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:13.261759 | orchestrator | 2025-05-19 19:50:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:16.316010 | orchestrator | 2025-05-19 19:50:16 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:16.317547 | orchestrator | 2025-05-19 19:50:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:16.320024 | orchestrator | 2025-05-19 19:50:16 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:16.321934 | orchestrator | 2025-05-19 19:50:16 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:16.322005 | orchestrator | 2025-05-19 19:50:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:19.368955 | orchestrator | 2025-05-19 19:50:19 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:19.370367 | orchestrator | 2025-05-19 19:50:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:19.372104 | orchestrator | 2025-05-19 19:50:19 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:19.373126 | orchestrator | 2025-05-19 19:50:19 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:19.373269 | orchestrator | 2025-05-19 19:50:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:22.420901 | orchestrator | 2025-05-19 19:50:22 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:22.424285 | orchestrator | 2025-05-19 19:50:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:22.425612 | orchestrator | 2025-05-19 19:50:22 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:22.428453 | orchestrator | 2025-05-19 19:50:22 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:22.428692 | orchestrator | 2025-05-19 19:50:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:25.486771 | orchestrator | 2025-05-19 19:50:25 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:25.487555 | orchestrator | 2025-05-19 19:50:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:25.490495 | orchestrator | 2025-05-19 19:50:25 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:25.491019 | orchestrator | 2025-05-19 19:50:25 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:25.491051 | orchestrator | 2025-05-19 19:50:25 | INFO  | Wait 1 
second(s) until the next check 2025-05-19 19:50:28.546548 | orchestrator | 2025-05-19 19:50:28 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:28.547681 | orchestrator | 2025-05-19 19:50:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:28.552650 | orchestrator | 2025-05-19 19:50:28 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:28.555545 | orchestrator | 2025-05-19 19:50:28 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:28.555791 | orchestrator | 2025-05-19 19:50:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:31.609599 | orchestrator | 2025-05-19 19:50:31 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:31.613641 | orchestrator | 2025-05-19 19:50:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:31.617635 | orchestrator | 2025-05-19 19:50:31 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:31.618956 | orchestrator | 2025-05-19 19:50:31 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:31.619017 | orchestrator | 2025-05-19 19:50:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:34.678786 | orchestrator | 2025-05-19 19:50:34 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:34.679966 | orchestrator | 2025-05-19 19:50:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:34.681676 | orchestrator | 2025-05-19 19:50:34 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:34.683208 | orchestrator | 2025-05-19 19:50:34 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:34.683259 | orchestrator | 2025-05-19 19:50:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:37.741189 | orchestrator | 2025-05-19 19:50:37 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:37.742221 | orchestrator | 2025-05-19 19:50:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:37.743252 | orchestrator | 2025-05-19 19:50:37 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:37.744450 | orchestrator | 2025-05-19 19:50:37 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:37.744502 | orchestrator | 2025-05-19 19:50:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:40.794794 | orchestrator | 2025-05-19 19:50:40 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:40.795408 | orchestrator | 2025-05-19 19:50:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:40.796405 | orchestrator | 2025-05-19 19:50:40 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:40.797630 | orchestrator | 2025-05-19 19:50:40 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:40.797665 | orchestrator | 2025-05-19 19:50:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:43.857880 | orchestrator | 2025-05-19 19:50:43 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state STARTED 2025-05-19 19:50:43.862892 | orchestrator | 2025-05-19 19:50:43 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:43.864866 | orchestrator | 2025-05-19 19:50:43 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:43.866558 | orchestrator | 2025-05-19 19:50:43 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:43.866606 | orchestrator | 2025-05-19 19:50:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:46.926387 | orchestrator | 2025-05-19 19:50:46 | INFO  | Task bc189bb7-a879-4bf6-b683-7c33e810b23e is in state SUCCESS 2025-05-19 19:50:46.927230 | orchestrator | 2025-05-19 19:50:46.927260 | orchestrator | 2025-05-19 19:50:46.927269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:50:46.927277 | orchestrator | 2025-05-19 19:50:46.927284 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:50:46.927292 | orchestrator | Monday 19 May 2025 19:49:05 +0000 (0:00:00.309) 0:00:00.309 ************ 2025-05-19 19:50:46.927300 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.927309 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.927316 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.927324 | orchestrator | 2025-05-19 19:50:46.927331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:50:46.927338 | orchestrator | Monday 19 May 2025 19:49:06 +0000 (0:00:00.419) 0:00:00.729 ************ 2025-05-19 19:50:46.927345 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-19 19:50:46.927353 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-19 19:50:46.927359 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-19 19:50:46.927365 | orchestrator | 2025-05-19 19:50:46.927371 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-19 19:50:46.927378 | orchestrator | 2025-05-19 19:50:46.927397 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 19:50:46.927404 | orchestrator | Monday 19 May 2025 19:49:06 +0000 (0:00:00.333) 0:00:01.062 ************ 2025-05-19 19:50:46.927410 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:50:46.927418 | orchestrator | 2025-05-19 19:50:46.927424 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-19 19:50:46.927430 | orchestrator | Monday 19 May 2025 19:49:07 +0000 (0:00:00.738) 0:00:01.801 ************ 2025-05-19 19:50:46.927443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.927488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.927498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.927510 | orchestrator | 2025-05-19 19:50:46.927516 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-19 19:50:46.927523 | orchestrator | Monday 19 May 2025 19:49:08 +0000 (0:00:01.645) 0:00:03.446 ************ 2025-05-19 19:50:46.927529 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.927535 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.927541 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.927547 | orchestrator | 2025-05-19 19:50:46.927553 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 19:50:46.927560 | orchestrator | Monday 19 May 2025 19:49:09 +0000 (0:00:00.289) 0:00:03.735 ************ 2025-05-19 19:50:46.927569 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 19:50:46.927576 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 19:50:46.927583 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-19 19:50:46.927589 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 19:50:46.927595 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 19:50:46.927766 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-19 19:50:46.927777 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 19:50:46.927784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 19:50:46.927790 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 19:50:46.927800 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-19 19:50:46.927807 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 19:50:46.927813 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 19:50:46.927819 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-19 19:50:46.927825 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 19:50:46.927832 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-19 19:50:46.927844 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-19 19:50:46.927850 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-19 19:50:46.927856 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-19 19:50:46.927862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-19 19:50:46.927869 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-19 19:50:46.927875 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-19 19:50:46.927882 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-19 19:50:46.927890 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-19 19:50:46.927897 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-19 19:50:46.927903 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-19 19:50:46.927909 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-19 19:50:46.927916 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-19 19:50:46.927922 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-19 19:50:46.927929 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'manila', 'enabled': True}) 2025-05-19 19:50:46.927935 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-19 19:50:46.927941 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-19 19:50:46.927947 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-19 19:50:46.927953 | orchestrator | 2025-05-19 19:50:46.927960 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.927966 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:01.055) 0:00:04.791 ************ 2025-05-19 19:50:46.927972 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.927979 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.927985 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.927991 | orchestrator | 2025-05-19 19:50:46.927997 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928004 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:00.436) 0:00:05.228 ************ 2025-05-19 19:50:46.928010 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928017 | orchestrator | 2025-05-19 19:50:46.928027 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928034 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:00.127) 0:00:05.355 ************ 2025-05-19 19:50:46.928040 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928047 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928053 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928059 | orchestrator | 2025-05-19 19:50:46.928065 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928076 | orchestrator | Monday 19 May 2025 19:49:11 +0000 (0:00:00.487) 0:00:05.843 ************ 2025-05-19 19:50:46.928083 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928136 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928144 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928150 | orchestrator | 2025-05-19 19:50:46.928156 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928162 | orchestrator | Monday 19 May 2025 19:49:11 +0000 (0:00:00.292) 0:00:06.135 ************ 2025-05-19 19:50:46.928168 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928175 | orchestrator | 2025-05-19 19:50:46.928181 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928187 | orchestrator | Monday 19 May 2025 19:49:11 +0000 (0:00:00.115) 0:00:06.251 ************ 2025-05-19 19:50:46.928194 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928200 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928206 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928212 | orchestrator | 2025-05-19 19:50:46.928218 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928225 | orchestrator | Monday 19 May 2025 19:49:12 +0000 
(0:00:00.636) 0:00:06.888 ************ 2025-05-19 19:50:46.928231 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928237 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928243 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928249 | orchestrator | 2025-05-19 19:50:46.928255 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928262 | orchestrator | Monday 19 May 2025 19:49:12 +0000 (0:00:00.462) 0:00:07.350 ************ 2025-05-19 19:50:46.928268 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928274 | orchestrator | 2025-05-19 19:50:46.928280 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928286 | orchestrator | Monday 19 May 2025 19:49:13 +0000 (0:00:00.132) 0:00:07.483 ************ 2025-05-19 19:50:46.928292 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928298 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928331 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928338 | orchestrator | 2025-05-19 19:50:46.928344 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928350 | orchestrator | Monday 19 May 2025 19:49:13 +0000 (0:00:00.507) 0:00:07.990 ************ 2025-05-19 19:50:46.928356 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928362 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928368 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928375 | orchestrator | 2025-05-19 19:50:46.928381 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928387 | orchestrator | Monday 19 May 2025 19:49:14 +0000 (0:00:00.497) 0:00:08.487 ************ 2025-05-19 19:50:46.928393 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928400 | orchestrator | 2025-05-19 19:50:46.928407 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928414 | orchestrator | Monday 19 May 2025 19:49:14 +0000 (0:00:00.117) 0:00:08.605 ************ 2025-05-19 19:50:46.928421 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928428 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928435 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928442 | orchestrator | 2025-05-19 19:50:46.928449 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928456 | orchestrator | Monday 19 May 2025 19:49:14 +0000 (0:00:00.438) 0:00:09.043 ************ 2025-05-19 19:50:46.928463 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928470 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928477 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928484 | orchestrator | 2025-05-19 19:50:46.928491 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928503 | orchestrator | Monday 19 May 2025 19:49:14 +0000 (0:00:00.293) 0:00:09.337 ************ 2025-05-19 19:50:46.928511 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928518 | orchestrator | 2025-05-19 19:50:46.928525 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928532 | orchestrator | Monday 19 May 2025 19:49:15 +0000 (0:00:00.300) 0:00:09.638 ************ 
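(Editorial sketch, for readers tracing the repeated policy tasks above: the include_tasks loop skips services whose item has enabled=False and includes policy_item.yml once per enabled service, which then produces one "Update policy file name" / "Check if policies shall be overwritten" / "Update custom policy file name" trio per service. Note that ceilometer and cinder carry the string 'yes' while the other enabled items carry a boolean True; both are treated as enabled. A minimal Python model of that selection, using only the item shapes visible in the log; the helper name is illustrative, not kolla-ansible's own code.)

    # Illustrative only: models the skip/include decisions shown in the log.
    def is_enabled(value):
        # Ansible-style truthiness: booleans pass through; strings such as
        # "yes"/"true" count as enabled (ceilometer and cinder use 'yes').
        if isinstance(value, bool):
            return value
        return str(value).strip().lower() in ("yes", "true", "1", "on")

    services = [
        {"name": "cloudkitty", "enabled": False},
        {"name": "ceilometer", "enabled": "yes"},
        {"name": "cinder", "enabled": "yes"},
        {"name": "designate", "enabled": True},
        # ... remaining items as listed in the log ...
    ]

    for item in services:
        if not is_enabled(item["enabled"]):
            print(f"skipping: (item={item})")
        else:
            print(f"included: policy_item.yml => (item={item})")
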
2025-05-19 19:50:46.928539 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928546 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928553 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928559 | orchestrator | 2025-05-19 19:50:46.928566 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928573 | orchestrator | Monday 19 May 2025 19:49:15 +0000 (0:00:00.342) 0:00:09.981 ************ 2025-05-19 19:50:46.928580 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928587 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928594 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928601 | orchestrator | 2025-05-19 19:50:46.928608 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928615 | orchestrator | Monday 19 May 2025 19:49:16 +0000 (0:00:00.498) 0:00:10.479 ************ 2025-05-19 19:50:46.928622 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928630 | orchestrator | 2025-05-19 19:50:46.928636 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928643 | orchestrator | Monday 19 May 2025 19:49:16 +0000 (0:00:00.110) 0:00:10.589 ************ 2025-05-19 19:50:46.928650 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928657 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928664 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928671 | orchestrator | 2025-05-19 19:50:46.928678 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928685 | orchestrator | Monday 19 May 2025 19:49:16 +0000 (0:00:00.571) 0:00:11.160 ************ 2025-05-19 19:50:46.928697 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928705 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928711 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928717 | orchestrator | 2025-05-19 19:50:46.928723 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928729 | orchestrator | Monday 19 May 2025 19:49:17 +0000 (0:00:00.448) 0:00:11.609 ************ 2025-05-19 19:50:46.928735 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928741 | orchestrator | 2025-05-19 19:50:46.928747 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928754 | orchestrator | Monday 19 May 2025 19:49:17 +0000 (0:00:00.142) 0:00:11.751 ************ 2025-05-19 19:50:46.928760 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928766 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928772 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928778 | orchestrator | 2025-05-19 19:50:46.928784 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928790 | orchestrator | Monday 19 May 2025 19:49:17 +0000 (0:00:00.607) 0:00:12.359 ************ 2025-05-19 19:50:46.928796 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928806 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928812 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928818 | orchestrator | 2025-05-19 19:50:46.928824 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 
19:50:46.928831 | orchestrator | Monday 19 May 2025 19:49:18 +0000 (0:00:00.422) 0:00:12.782 ************ 2025-05-19 19:50:46.928837 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928843 | orchestrator | 2025-05-19 19:50:46.928849 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928855 | orchestrator | Monday 19 May 2025 19:49:18 +0000 (0:00:00.277) 0:00:13.059 ************ 2025-05-19 19:50:46.928861 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928871 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928877 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928883 | orchestrator | 2025-05-19 19:50:46.928890 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928896 | orchestrator | Monday 19 May 2025 19:49:18 +0000 (0:00:00.279) 0:00:13.339 ************ 2025-05-19 19:50:46.928902 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.928908 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.928914 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.928920 | orchestrator | 2025-05-19 19:50:46.928926 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.928932 | orchestrator | Monday 19 May 2025 19:49:19 +0000 (0:00:00.515) 0:00:13.854 ************ 2025-05-19 19:50:46.928938 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928944 | orchestrator | 2025-05-19 19:50:46.928951 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.928957 | orchestrator | Monday 19 May 2025 19:49:19 +0000 (0:00:00.121) 0:00:13.975 ************ 2025-05-19 19:50:46.928963 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.928969 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.928975 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.928981 | orchestrator | 2025-05-19 19:50:46.928987 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.928993 | orchestrator | Monday 19 May 2025 19:49:19 +0000 (0:00:00.462) 0:00:14.438 ************ 2025-05-19 19:50:46.928999 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.929006 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.929012 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.929018 | orchestrator | 2025-05-19 19:50:46.929024 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.929030 | orchestrator | Monday 19 May 2025 19:49:20 +0000 (0:00:00.572) 0:00:15.010 ************ 2025-05-19 19:50:46.929036 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929042 | orchestrator | 2025-05-19 19:50:46.929049 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.929055 | orchestrator | Monday 19 May 2025 19:49:20 +0000 (0:00:00.128) 0:00:15.139 ************ 2025-05-19 19:50:46.929061 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929067 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929073 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929079 | orchestrator | 2025-05-19 19:50:46.929086 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 19:50:46.929106 | 
orchestrator | Monday 19 May 2025 19:49:21 +0000 (0:00:00.441) 0:00:15.581 ************ 2025-05-19 19:50:46.929112 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:50:46.929119 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:50:46.929125 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:50:46.929131 | orchestrator | 2025-05-19 19:50:46.929137 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 19:50:46.929143 | orchestrator | Monday 19 May 2025 19:49:21 +0000 (0:00:00.760) 0:00:16.342 ************ 2025-05-19 19:50:46.929149 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929155 | orchestrator | 2025-05-19 19:50:46.929162 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 19:50:46.929168 | orchestrator | Monday 19 May 2025 19:49:22 +0000 (0:00:00.134) 0:00:16.476 ************ 2025-05-19 19:50:46.929174 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929180 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929186 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929192 | orchestrator | 2025-05-19 19:50:46.929198 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-19 19:50:46.929205 | orchestrator | Monday 19 May 2025 19:49:22 +0000 (0:00:00.525) 0:00:17.002 ************ 2025-05-19 19:50:46.929211 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:50:46.929217 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:50:46.929227 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:50:46.929233 | orchestrator | 2025-05-19 19:50:46.929240 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-19 19:50:46.929246 | orchestrator | Monday 19 May 2025 19:49:25 +0000 (0:00:03.269) 0:00:20.272 ************ 2025-05-19 19:50:46.929252 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 19:50:46.929262 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 19:50:46.929269 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 19:50:46.929275 | orchestrator | 2025-05-19 19:50:46.929281 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-19 19:50:46.929287 | orchestrator | Monday 19 May 2025 19:49:28 +0000 (0:00:03.168) 0:00:23.440 ************ 2025-05-19 19:50:46.929294 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 19:50:46.929300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 19:50:46.929307 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 19:50:46.929313 | orchestrator | 2025-05-19 19:50:46.929319 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-19 19:50:46.929328 | orchestrator | Monday 19 May 2025 19:49:31 +0000 (0:00:02.697) 0:00:26.137 ************ 2025-05-19 19:50:46.929335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 19:50:46.929341 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 19:50:46.929347 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 19:50:46.929353 | orchestrator | 2025-05-19 19:50:46.929359 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-19 19:50:46.929365 | orchestrator | Monday 19 May 2025 19:49:33 +0000 (0:00:01.988) 0:00:28.126 ************ 2025-05-19 19:50:46.929372 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929378 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929384 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929390 | orchestrator | 2025-05-19 19:50:46.929396 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-19 19:50:46.929402 | orchestrator | Monday 19 May 2025 19:49:34 +0000 (0:00:00.363) 0:00:28.490 ************ 2025-05-19 19:50:46.929408 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929414 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929421 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929427 | orchestrator | 2025-05-19 19:50:46.929433 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 19:50:46.929439 | orchestrator | Monday 19 May 2025 19:49:34 +0000 (0:00:00.401) 0:00:28.892 ************ 2025-05-19 19:50:46.929445 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:50:46.929452 | orchestrator | 2025-05-19 19:50:46.929458 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-19 19:50:46.929464 | orchestrator | Monday 19 May 2025 19:49:35 +0000 (0:00:00.670) 0:00:29.563 ************ 2025-05-19 19:50:46.929476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929524 | orchestrator | 2025-05-19 19:50:46.929530 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-19 19:50:46.929536 | orchestrator | Monday 19 May 2025 19:49:37 +0000 (0:00:01.967) 0:00:31.530 ************ 2025-05-19 19:50:46.929547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929559 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929588 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929615 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929627 | orchestrator | 2025-05-19 19:50:46.929643 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-19 19:50:46.929652 | orchestrator | Monday 19 May 2025 19:49:38 +0000 (0:00:00.996) 0:00:32.526 ************ 2025-05-19 19:50:46.929676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back 
if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929687 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929715 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 19:50:46.929751 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929760 | orchestrator | 2025-05-19 19:50:46.929770 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-19 19:50:46.929779 | orchestrator | Monday 19 May 2025 19:49:39 +0000 (0:00:01.648) 0:00:34.175 ************ 2025-05-19 19:50:46.929801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 19:50:46.929848 | orchestrator | 2025-05-19 19:50:46.929858 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 19:50:46.929868 | orchestrator | Monday 19 May 2025 19:49:45 +0000 (0:00:05.403) 0:00:39.579 ************ 2025-05-19 19:50:46.929878 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:50:46.929888 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:50:46.929898 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:50:46.929907 | orchestrator | 2025-05-19 19:50:46.929917 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 19:50:46.929926 | orchestrator | Monday 19 May 2025 19:49:45 +0000 (0:00:00.322) 0:00:39.902 ************ 2025-05-19 19:50:46.929940 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:50:46.929951 | orchestrator | 2025-05-19 19:50:46.929962 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-19 19:50:46.929973 | orchestrator | Monday 19 May 2025 19:49:46 +0000 (0:00:00.619) 0:00:40.522 ************ 2025-05-19 19:50:46.929985 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:50:46.929996 | orchestrator | 2025-05-19 19:50:46.930009 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-19 19:50:46.930121 | orchestrator | Monday 19 May 2025 19:49:48 +0000 (0:00:02.488) 0:00:43.011 ************ 2025-05-19 19:50:46.930136 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:50:46.930147 | orchestrator | 2025-05-19 19:50:46.930169 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-19 19:50:46.930182 | orchestrator | Monday 19 May 2025 19:49:50 +0000 (0:00:02.405) 0:00:45.416 ************ 2025-05-19 19:50:46.930194 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:50:46.930205 | orchestrator | 2025-05-19 19:50:46.930217 | orchestrator | TASK [horizon : Flush handlers] 
************************************************ 2025-05-19 19:50:46.930229 | orchestrator | Monday 19 May 2025 19:50:05 +0000 (0:00:14.970) 0:01:00.386 ************ 2025-05-19 19:50:46.930241 | orchestrator | 2025-05-19 19:50:46.930254 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 19:50:46.930266 | orchestrator | Monday 19 May 2025 19:50:05 +0000 (0:00:00.058) 0:01:00.445 ************ 2025-05-19 19:50:46.930277 | orchestrator | 2025-05-19 19:50:46.930289 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 19:50:46.930300 | orchestrator | Monday 19 May 2025 19:50:06 +0000 (0:00:00.239) 0:01:00.684 ************ 2025-05-19 19:50:46.930313 | orchestrator | 2025-05-19 19:50:46.930324 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-19 19:50:46.930337 | orchestrator | Monday 19 May 2025 19:50:06 +0000 (0:00:00.060) 0:01:00.745 ************ 2025-05-19 19:50:46.930348 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:50:46.930361 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:50:46.930371 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:50:46.930378 | orchestrator | 2025-05-19 19:50:46.930385 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:50:46.930394 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-19 19:50:46.930402 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 19:50:46.930410 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 19:50:46.930417 | orchestrator | 2025-05-19 19:50:46.930424 | orchestrator | 2025-05-19 19:50:46.930431 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:50:46.930438 | orchestrator | Monday 19 May 2025 19:50:44 +0000 (0:00:37.729) 0:01:38.474 ************ 2025-05-19 19:50:46.930445 | orchestrator | =============================================================================== 2025-05-19 19:50:46.930454 | orchestrator | horizon : Restart horizon container ------------------------------------ 37.73s 2025-05-19 19:50:46.930466 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.97s 2025-05-19 19:50:46.930477 | orchestrator | horizon : Deploy horizon container -------------------------------------- 5.40s 2025-05-19 19:50:46.930488 | orchestrator | horizon : Copying over config.json files for services ------------------- 3.27s 2025-05-19 19:50:46.930500 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 3.17s 2025-05-19 19:50:46.930512 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.70s 2025-05-19 19:50:46.930525 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.49s 2025-05-19 19:50:46.930538 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.41s 2025-05-19 19:50:46.930550 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.99s 2025-05-19 19:50:46.930561 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.97s 2025-05-19 19:50:46.930725 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.65s 2025-05-19 19:50:46.930738 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.65s 2025-05-19 19:50:46.930745 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.06s 2025-05-19 19:50:46.930763 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.00s 2025-05-19 19:50:46.930784 | orchestrator | horizon : Update policy file name --------------------------------------- 0.76s 2025-05-19 19:50:46.930792 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-05-19 19:50:46.930799 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s 2025-05-19 19:50:46.930806 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.64s 2025-05-19 19:50:46.930814 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-05-19 19:50:46.930821 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.61s 2025-05-19 19:50:46.930828 | orchestrator | 2025-05-19 19:50:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:46.932873 | orchestrator | 2025-05-19 19:50:46 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:46.935141 | orchestrator | 2025-05-19 19:50:46 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:46.935175 | orchestrator | 2025-05-19 19:50:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:49.991008 | orchestrator | 2025-05-19 19:50:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:49.992459 | orchestrator | 2025-05-19 19:50:49 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:49.994885 | orchestrator | 2025-05-19 19:50:49 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:49.994926 | orchestrator | 2025-05-19 19:50:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:53.048217 | orchestrator | 2025-05-19 19:50:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:53.049336 | orchestrator | 2025-05-19 19:50:53 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:53.050600 | orchestrator | 2025-05-19 19:50:53 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:53.050626 | orchestrator | 2025-05-19 19:50:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:56.100190 | orchestrator | 2025-05-19 19:50:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:56.101191 | orchestrator | 2025-05-19 19:50:56 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:50:56.102746 | orchestrator | 2025-05-19 19:50:56 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED 2025-05-19 19:50:56.102792 | orchestrator | 2025-05-19 19:50:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:50:59.158662 | orchestrator | 2025-05-19 19:50:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:50:59.160633 | orchestrator | 2025-05-19 19:50:59 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 
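(Editorial sketch: the surrounding lines show the deployment wrapper polling three task IDs until they leave the STARTED state, announcing "Wait 1 second(s) until the next check" between rounds; the timestamps show roughly three seconds between rounds once the status queries themselves are included. A minimal Python sketch of such a polling loop; the state lookup below is simulated for illustration and is not the actual OSISM client API.)

    import itertools
    import time

    task_ids = [
        "6cbcb477-08de-4f2b-846d-588e50cbe210",
        "63cad8ac-5d3d-45a5-887f-fcf7038e2f02",
        "4fc0e296-fe12-467b-9bd2-6b79ce678db1",
    ]

    # Simulated state source so the sketch runs standalone; a real client
    # would query the task status from the API instead.
    _fake_states = {
        task: itertools.chain(["STARTED"] * 3, itertools.repeat("SUCCESS"))
        for task in task_ids
    }

    def get_task_state(task_id):
        return next(_fake_states[task_id])

    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print("INFO | Wait 1 second(s) until the next check")
            time.sleep(1)
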
2025-05-19 19:51:38.861502 | orchestrator | 2025-05-19 19:51:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:51:38.863367 | orchestrator | 2025-05-19 19:51:38 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED
2025-05-19 19:51:38.865605 | orchestrator | 2025-05-19 19:51:38 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state STARTED
2025-05-19 19:51:38.865883 | orchestrator | 2025-05-19 19:51:38 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:51:41.920991 | orchestrator | 2025-05-19 19:51:41 | INFO  | Task
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:51:41.921088 | orchestrator | 2025-05-19 19:51:41 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state STARTED 2025-05-19 19:51:41.922006 | orchestrator | 2025-05-19 19:51:41 | INFO  | Task 4fc0e296-fe12-467b-9bd2-6b79ce678db1 is in state SUCCESS 2025-05-19 19:51:41.923588 | orchestrator | 2025-05-19 19:51:41.923620 | orchestrator | 2025-05-19 19:51:41.923627 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:51:41.923635 | orchestrator | 2025-05-19 19:51:41.923641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:51:41.923648 | orchestrator | Monday 19 May 2025 19:49:05 +0000 (0:00:00.308) 0:00:00.308 ************ 2025-05-19 19:51:41.923654 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.923662 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.923668 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.923674 | orchestrator | 2025-05-19 19:51:41.923680 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:51:41.923686 | orchestrator | Monday 19 May 2025 19:49:06 +0000 (0:00:00.395) 0:00:00.703 ************ 2025-05-19 19:51:41.923693 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-19 19:51:41.923718 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-19 19:51:41.923724 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-19 19:51:41.923729 | orchestrator | 2025-05-19 19:51:41.923735 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-19 19:51:41.923741 | orchestrator | 2025-05-19 19:51:41.923746 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.923752 | orchestrator | Monday 19 May 2025 19:49:06 +0000 (0:00:00.294) 0:00:00.997 ************ 2025-05-19 19:51:41.923759 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:51:41.923765 | orchestrator | 2025-05-19 19:51:41.923771 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-19 19:51:41.923777 | orchestrator | Monday 19 May 2025 19:49:07 +0000 (0:00:00.783) 0:00:01.780 ************ 2025-05-19 19:51:41.923788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 
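Each item echoed by these keystone tasks is one kolla-ansible service definition: the container image, its bind mounts, a Docker healthcheck and, for the API container, the HAProxy listeners. Both keystone listeners here carry tls_backend: 'no', which is why the service-cert-copy tasks for the backend internal TLS certificate and key further down skip every item while the extra-CA copy still reports changed. A trimmed, illustrative reconstruction of that structure and of the skip decision follows; needs_backend_tls is a made-up helper for this sketch, not the role's actual logic or variable names.

# Trimmed reconstruction of the service definitions echoed in this play; only
# the fields relevant to the backend-TLS decision are kept, with values taken
# from the log output.
keystone_services = {
    "keystone": {
        "container_name": "keystone",
        "image": "registry.osism.tech/kolla/release/keystone:25.0.1.20241206",
        "haproxy": {
            "keystone_internal": {"enabled": True, "external": False,
                                  "tls_backend": "no", "port": "5000"},
            "keystone_external": {"enabled": True, "external": True,
                                  "external_fqdn": "api.testbed.osism.xyz",
                                  "tls_backend": "no", "port": "5000"},
        },
    },
    "keystone-ssh": {"container_name": "keystone_ssh"},        # no haproxy section
    "keystone-fernet": {"container_name": "keystone_fernet"},  # no haproxy section
}

def needs_backend_tls(service: dict) -> bool:
    # Illustrative stand-in for the skip condition: backend TLS material is only
    # copied when at least one HAProxy listener terminates TLS on the backend.
    listeners = service.get("haproxy", {}).values()
    return any(listener.get("tls_backend") == "yes" for listener in listeners)

for name, service in keystone_services.items():
    verdict = "copy backend TLS files" if needs_backend_tls(service) else "skip"
    print(f"{name}: {verdict}")  # with tls_backend set to 'no' everywhere, all skip

The same per-service dictionaries drive the later config.json, keystone.conf and policy-file tasks, which is why every task in this play iterates over the keystone, keystone-ssh and keystone-fernet keys.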
2025-05-19 19:51:41.923798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.923821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.923834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.923877 | orchestrator | 2025-05-19 19:51:41.923883 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-19 19:51:41.923901 | orchestrator | Monday 19 May 2025 19:49:09 +0000 (0:00:02.139) 0:00:03.919 ************ 2025-05-19 19:51:41.923907 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-19 19:51:41.923913 | orchestrator | 2025-05-19 19:51:41.923919 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-19 19:51:41.923925 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:00.551) 0:00:04.471 ************ 2025-05-19 19:51:41.923951 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.923962 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.923970 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.923976 | orchestrator | 2025-05-19 19:51:41.923981 | orchestrator | TASK 
[keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-19 19:51:41.923987 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:00.456) 0:00:04.928 ************ 2025-05-19 19:51:41.923993 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:51:41.923999 | orchestrator | 2025-05-19 19:51:41.924005 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.924011 | orchestrator | Monday 19 May 2025 19:49:10 +0000 (0:00:00.412) 0:00:05.341 ************ 2025-05-19 19:51:41.924016 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:51:41.924022 | orchestrator | 2025-05-19 19:51:41.924028 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-19 19:51:41.924033 | orchestrator | Monday 19 May 2025 19:49:11 +0000 (0:00:00.649) 0:00:05.990 ************ 2025-05-19 19:51:41.924040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924511 | orchestrator | 2025-05-19 19:51:41.924517 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-19 19:51:41.924529 | orchestrator | Monday 19 May 2025 19:49:15 +0000 (0:00:03.540) 0:00:09.530 ************ 2025-05-19 19:51:41.924542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924561 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.924568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924603 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.924609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924632 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.924638 | orchestrator | 2025-05-19 19:51:41.924644 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-19 19:51:41.924650 | orchestrator | Monday 19 May 2025 19:49:15 +0000 (0:00:00.913) 0:00:10.444 ************ 2025-05-19 19:51:41.924659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924682 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.924688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924715 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.924730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-19 19:51:41.924736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 19:51:41.924748 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.924754 | orchestrator | 2025-05-19 19:51:41.924760 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-19 19:51:41.924766 | orchestrator | Monday 19 May 2025 19:49:17 +0000 (0:00:01.320) 0:00:11.764 ************ 2025-05-19 19:51:41.924772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924823 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924855 | orchestrator | 2025-05-19 19:51:41.924860 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-19 19:51:41.924866 | orchestrator | Monday 19 May 2025 19:49:21 +0000 (0:00:03.835) 0:00:15.599 ************ 2025-05-19 19:51:41.924872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.924915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.924921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.924986 | orchestrator | 2025-05-19 19:51:41.924991 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-19 19:51:41.924997 | orchestrator | Monday 19 May 2025 19:49:28 +0000 (0:00:07.590) 0:00:23.190 ************ 2025-05-19 19:51:41.925003 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.925009 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:51:41.925014 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:51:41.925020 | orchestrator | 2025-05-19 19:51:41.925026 | orchestrator | TASK [keystone : Create 
Keystone domain-specific config directory] ************* 2025-05-19 19:51:41.925031 | orchestrator | Monday 19 May 2025 19:49:31 +0000 (0:00:02.494) 0:00:25.684 ************ 2025-05-19 19:51:41.925041 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925047 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925053 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925058 | orchestrator | 2025-05-19 19:51:41.925068 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-19 19:51:41.925074 | orchestrator | Monday 19 May 2025 19:49:32 +0000 (0:00:00.882) 0:00:26.567 ************ 2025-05-19 19:51:41.925079 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925085 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925091 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925096 | orchestrator | 2025-05-19 19:51:41.925102 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-19 19:51:41.925108 | orchestrator | Monday 19 May 2025 19:49:32 +0000 (0:00:00.526) 0:00:27.094 ************ 2025-05-19 19:51:41.925113 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925119 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925125 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925130 | orchestrator | 2025-05-19 19:51:41.925136 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-19 19:51:41.925142 | orchestrator | Monday 19 May 2025 19:49:32 +0000 (0:00:00.344) 0:00:27.438 ************ 2025-05-19 19:51:41.925148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.925165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.925185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 19:51:41.925202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925220 | orchestrator | 2025-05-19 19:51:41.925228 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.925237 | orchestrator | Monday 19 May 2025 19:49:35 +0000 (0:00:02.466) 0:00:29.905 ************ 2025-05-19 19:51:41.925247 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925256 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925265 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925274 | orchestrator | 2025-05-19 19:51:41.925283 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-19 19:51:41.925292 | orchestrator | Monday 19 May 2025 19:49:35 +0000 (0:00:00.557) 0:00:30.463 ************ 2025-05-19 19:51:41.925306 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 19:51:41.925316 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 19:51:41.925331 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-19 19:51:41.925341 | orchestrator | 2025-05-19 19:51:41.925351 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-19 19:51:41.925367 | orchestrator | Monday 19 May 2025 19:49:38 +0000 (0:00:02.240) 0:00:32.704 ************ 2025-05-19 19:51:41.925375 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:51:41.925381 | orchestrator | 2025-05-19 19:51:41.925386 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-19 19:51:41.925392 | orchestrator | Monday 19 May 2025 19:49:39 +0000 (0:00:00.864) 0:00:33.568 
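For readability, the keystone service definition that the loop items above print as a flattened Python dict corresponds to roughly the following YAML structure. The values are taken directly from the logged item for testbed-node-0; the empty '' entry in the volumes list (a conditionally rendered mount) is left out here, and the keystone-ssh and keystone-fernet entries follow the same shape with their own image and healthcheck command:

keystone:
  container_name: keystone
  group: keystone
  enabled: true
  image: registry.osism.tech/kolla/release/keystone:25.0.1.20241206
  volumes:
    - /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - keystone_fernet_tokens:/etc/keystone/fernet-keys
  healthcheck:
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"]
    interval: "30"
    retries: "3"
    start_period: "5"
    timeout: "30"
  haproxy:
    keystone_internal:
      enabled: true
      mode: http
      external: false
      tls_backend: "no"
      port: "5000"
      listen_port: "5000"
      backend_http_extra: ['balance "roundrobin"']
    keystone_external:
      enabled: true
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      tls_backend: "no"
      port: "5000"
      listen_port: "5000"
      backend_http_extra: ['balance "roundrobin"']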
************ 2025-05-19 19:51:41.925398 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925403 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925409 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925414 | orchestrator | 2025-05-19 19:51:41.925420 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-19 19:51:41.925426 | orchestrator | Monday 19 May 2025 19:49:40 +0000 (0:00:01.499) 0:00:35.068 ************ 2025-05-19 19:51:41.925431 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:51:41.925437 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 19:51:41.925443 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 19:51:41.925448 | orchestrator | 2025-05-19 19:51:41.925454 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-19 19:51:41.925460 | orchestrator | Monday 19 May 2025 19:49:41 +0000 (0:00:01.370) 0:00:36.439 ************ 2025-05-19 19:51:41.925465 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.925471 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.925476 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.925482 | orchestrator | 2025-05-19 19:51:41.925488 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-19 19:51:41.925493 | orchestrator | Monday 19 May 2025 19:49:42 +0000 (0:00:00.394) 0:00:36.833 ************ 2025-05-19 19:51:41.925499 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 19:51:41.925505 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 19:51:41.925510 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-19 19:51:41.925516 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 19:51:41.925522 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 19:51:41.925528 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-19 19:51:41.925533 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-19 19:51:41.925539 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-19 19:51:41.925545 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-19 19:51:41.925551 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-19 19:51:41.925556 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-19 19:51:41.925562 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-19 19:51:41.925568 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-19 19:51:41.925574 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-19 19:51:41.925579 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-19 19:51:41.925585 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 19:51:41.925591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 19:51:41.925596 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 19:51:41.925608 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 19:51:41.925614 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 19:51:41.925619 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 19:51:41.925625 | orchestrator | 2025-05-19 19:51:41.925631 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-19 19:51:41.925636 | orchestrator | Monday 19 May 2025 19:49:53 +0000 (0:00:11.198) 0:00:48.032 ************ 2025-05-19 19:51:41.925642 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 19:51:41.925647 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 19:51:41.925653 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 19:51:41.925659 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 19:51:41.925668 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 19:51:41.925677 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 19:51:41.925683 | orchestrator | 2025-05-19 19:51:41.925688 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-19 19:51:41.925694 | orchestrator | Monday 19 May 2025 19:49:56 +0000 (0:00:03.359) 0:00:51.392 ************ 2025-05-19 19:51:41.925700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-19 19:51:41.925724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 19:51:41.925773 | orchestrator | 2025-05-19 19:51:41.925779 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.925785 | orchestrator | Monday 19 May 2025 19:50:00 +0000 (0:00:03.218) 0:00:54.610 ************ 2025-05-19 19:51:41.925791 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925796 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925802 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925807 | orchestrator | 2025-05-19 19:51:41.925813 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-19 19:51:41.925819 | orchestrator | Monday 19 May 2025 19:50:00 +0000 (0:00:00.296) 0:00:54.907 ************ 2025-05-19 19:51:41.925824 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.925830 | orchestrator | 2025-05-19 19:51:41.925836 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-19 19:51:41.925841 | orchestrator | Monday 19 May 2025 19:50:03 +0000 (0:00:02.851) 0:00:57.758 ************ 2025-05-19 19:51:41.925847 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.925852 | orchestrator | 2025-05-19 19:51:41.925858 | orchestrator | TASK [keystone : Checking for any running 
keystone_fernet containers] ********** 2025-05-19 19:51:41.925863 | orchestrator | Monday 19 May 2025 19:50:05 +0000 (0:00:02.409) 0:01:00.168 ************ 2025-05-19 19:51:41.925869 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.925875 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.925880 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.925886 | orchestrator | 2025-05-19 19:51:41.925891 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-19 19:51:41.925900 | orchestrator | Monday 19 May 2025 19:50:06 +0000 (0:00:01.045) 0:01:01.214 ************ 2025-05-19 19:51:41.925906 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.925914 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.925920 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.925925 | orchestrator | 2025-05-19 19:51:41.925949 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-19 19:51:41.925955 | orchestrator | Monday 19 May 2025 19:50:07 +0000 (0:00:00.477) 0:01:01.691 ************ 2025-05-19 19:51:41.925961 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.925967 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.925972 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.925978 | orchestrator | 2025-05-19 19:51:41.925984 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-19 19:51:41.925989 | orchestrator | Monday 19 May 2025 19:50:08 +0000 (0:00:00.932) 0:01:02.623 ************ 2025-05-19 19:51:41.925995 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926000 | orchestrator | 2025-05-19 19:51:41.926006 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-19 19:51:41.926012 | orchestrator | Monday 19 May 2025 19:50:21 +0000 (0:00:13.842) 0:01:16.466 ************ 2025-05-19 19:51:41.926043 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926049 | orchestrator | 2025-05-19 19:51:41.926054 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-19 19:51:41.926060 | orchestrator | Monday 19 May 2025 19:50:31 +0000 (0:00:09.859) 0:01:26.326 ************ 2025-05-19 19:51:41.926066 | orchestrator | 2025-05-19 19:51:41.926072 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-19 19:51:41.926077 | orchestrator | Monday 19 May 2025 19:50:31 +0000 (0:00:00.053) 0:01:26.380 ************ 2025-05-19 19:51:41.926088 | orchestrator | 2025-05-19 19:51:41.926094 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-19 19:51:41.926099 | orchestrator | Monday 19 May 2025 19:50:31 +0000 (0:00:00.052) 0:01:26.432 ************ 2025-05-19 19:51:41.926105 | orchestrator | 2025-05-19 19:51:41.926111 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-19 19:51:41.926116 | orchestrator | Monday 19 May 2025 19:50:32 +0000 (0:00:00.056) 0:01:26.489 ************ 2025-05-19 19:51:41.926122 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926128 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:51:41.926133 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:51:41.926139 | orchestrator | 2025-05-19 19:51:41.926145 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] 
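As a rough illustration of what the two database tasks above accomplish ("Creating keystone database" and "Creating Keystone database user and setting permissions"), the equivalent plain Ansible would look something like the sketch below. This is not how kolla-ansible implements these steps (it drives them through its own toolbox container and modules); the login parameters and the database_address variable are placeholders for values that do not appear in this log excerpt:

- name: Create the keystone database and user (illustrative sketch only)
  hosts: testbed-node-0
  tasks:
    - name: Create the keystone database
      community.mysql.mysql_db:
        name: keystone
        state: present
        login_host: "{{ database_address }}"        # placeholder, the DB VIP is not shown in this log
        login_user: root                            # placeholder
        login_password: "{{ database_password }}"   # placeholder

    - name: Create the keystone DB user and grant privileges
      community.mysql.mysql_user:
        name: keystone
        password: "{{ keystone_database_password }}"  # placeholder
        host: "%"
        priv: "keystone.*:ALL"
        state: present
        login_host: "{{ database_address }}"
        login_user: root
        login_password: "{{ database_password }}"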
***************** 2025-05-19 19:51:41.926150 | orchestrator | Monday 19 May 2025 19:50:41 +0000 (0:00:09.337) 0:01:35.826 ************ 2025-05-19 19:51:41.926156 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926162 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:51:41.926167 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:51:41.926173 | orchestrator | 2025-05-19 19:51:41.926179 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-19 19:51:41.926184 | orchestrator | Monday 19 May 2025 19:50:46 +0000 (0:00:04.750) 0:01:40.577 ************ 2025-05-19 19:51:41.926190 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926196 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:51:41.926201 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:51:41.926207 | orchestrator | 2025-05-19 19:51:41.926213 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.926218 | orchestrator | Monday 19 May 2025 19:50:56 +0000 (0:00:10.265) 0:01:50.842 ************ 2025-05-19 19:51:41.926224 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:51:41.926230 | orchestrator | 2025-05-19 19:51:41.926235 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-05-19 19:51:41.926241 | orchestrator | Monday 19 May 2025 19:50:57 +0000 (0:00:00.827) 0:01:51.669 ************ 2025-05-19 19:51:41.926247 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.926253 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:51:41.926258 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:51:41.926264 | orchestrator | 2025-05-19 19:51:41.926269 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-05-19 19:51:41.926275 | orchestrator | Monday 19 May 2025 19:50:58 +0000 (0:00:01.084) 0:01:52.753 ************ 2025-05-19 19:51:41.926281 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:51:41.926286 | orchestrator | 2025-05-19 19:51:41.926292 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-05-19 19:51:41.926298 | orchestrator | Monday 19 May 2025 19:50:59 +0000 (0:00:01.586) 0:01:54.340 ************ 2025-05-19 19:51:41.926304 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-05-19 19:51:41.926309 | orchestrator | 2025-05-19 19:51:41.926315 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-05-19 19:51:41.926321 | orchestrator | Monday 19 May 2025 19:51:10 +0000 (0:00:10.441) 0:02:04.782 ************ 2025-05-19 19:51:41.926327 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-05-19 19:51:41.926332 | orchestrator | 2025-05-19 19:51:41.926338 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-05-19 19:51:41.926347 | orchestrator | Monday 19 May 2025 19:51:29 +0000 (0:00:19.202) 0:02:23.984 ************ 2025-05-19 19:51:41.926357 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-05-19 19:51:41.926366 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-05-19 19:51:41.926375 | orchestrator | 2025-05-19 19:51:41.926385 | orchestrator | TASK [service-ks-register : 
keystone | Creating projects] ********************** 2025-05-19 19:51:41.926400 | orchestrator | Monday 19 May 2025 19:51:36 +0000 (0:00:07.132) 0:02:31.116 ************ 2025-05-19 19:51:41.926411 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.926421 | orchestrator | 2025-05-19 19:51:41.926430 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-05-19 19:51:41.926440 | orchestrator | Monday 19 May 2025 19:51:36 +0000 (0:00:00.126) 0:02:31.243 ************ 2025-05-19 19:51:41.926451 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.926457 | orchestrator | 2025-05-19 19:51:41.926462 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-05-19 19:51:41.926473 | orchestrator | Monday 19 May 2025 19:51:36 +0000 (0:00:00.138) 0:02:31.382 ************ 2025-05-19 19:51:41.926479 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.926485 | orchestrator | 2025-05-19 19:51:41.926490 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-05-19 19:51:41.926496 | orchestrator | Monday 19 May 2025 19:51:37 +0000 (0:00:00.116) 0:02:31.498 ************ 2025-05-19 19:51:41.926502 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.926507 | orchestrator | 2025-05-19 19:51:41.926513 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-05-19 19:51:41.926519 | orchestrator | Monday 19 May 2025 19:51:37 +0000 (0:00:00.421) 0:02:31.920 ************ 2025-05-19 19:51:41.926524 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:51:41.926530 | orchestrator | 2025-05-19 19:51:41.926535 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 19:51:41.926541 | orchestrator | Monday 19 May 2025 19:51:40 +0000 (0:00:03.423) 0:02:35.344 ************ 2025-05-19 19:51:41.926547 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:51:41.926552 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:51:41.926558 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:51:41.926564 | orchestrator | 2025-05-19 19:51:41.926569 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:51:41.926575 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-19 19:51:41.926583 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-19 19:51:41.926589 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-19 19:51:41.926595 | orchestrator | 2025-05-19 19:51:41.926600 | orchestrator | 2025-05-19 19:51:41.926606 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:51:41.926612 | orchestrator | Monday 19 May 2025 19:51:41 +0000 (0:00:00.555) 0:02:35.899 ************ 2025-05-19 19:51:41.926617 | orchestrator | =============================================================================== 2025-05-19 19:51:41.926623 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.20s 2025-05-19 19:51:41.926629 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.84s 2025-05-19 19:51:41.926634 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 
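The "Creating services" and "Creating endpoints" steps above register Keystone itself in its own service catalog. Stripped of the role's abstractions, they amount to something like the following plain-CLI sketch, based only on the service name, region and URLs visible in this log; it is not the service-ks-register role's actual implementation (which talks to the API through Ansible modules) and assumes admin credentials are already available in the environment:

- name: Register the identity service and its endpoints (plain-CLI sketch)
  hosts: testbed-node-0
  tasks:
    - name: Create the keystone identity service
      ansible.builtin.command: openstack service create --name keystone identity

    - name: Create the internal endpoint
      ansible.builtin.command: >
        openstack endpoint create --region RegionOne
        identity internal https://api-int.testbed.osism.xyz:5000

    - name: Create the public endpoint
      ansible.builtin.command: >
        openstack endpoint create --region RegionOne
        identity public https://api.testbed.osism.xyz:5000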
11.20s 2025-05-19 19:51:41.926640 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.44s 2025-05-19 19:51:41.926645 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.27s 2025-05-19 19:51:41.926651 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.86s 2025-05-19 19:51:41.926657 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 9.34s 2025-05-19 19:51:41.926662 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.59s 2025-05-19 19:51:41.926668 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.13s 2025-05-19 19:51:41.926673 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.75s 2025-05-19 19:51:41.926684 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.84s 2025-05-19 19:51:41.926689 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2025-05-19 19:51:41.926695 | orchestrator | keystone : Creating default user role ----------------------------------- 3.42s 2025-05-19 19:51:41.926701 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.36s 2025-05-19 19:51:41.926706 | orchestrator | keystone : Check keystone containers ------------------------------------ 3.22s 2025-05-19 19:51:41.926712 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.85s 2025-05-19 19:51:41.926717 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.49s 2025-05-19 19:51:41.926723 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.47s 2025-05-19 19:51:41.926728 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.41s 2025-05-19 19:51:41.926734 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.24s 2025-05-19 19:51:41.926740 | orchestrator | 2025-05-19 19:51:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:51:44.973431 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:51:44.973547 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:51:44.974119 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:51:44.975559 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:51:44.976658 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task 63cad8ac-5d3d-45a5-887f-fcf7038e2f02 is in state SUCCESS 2025-05-19 19:51:44.978338 | orchestrator | 2025-05-19 19:51:44.978478 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:51:44.978836 | orchestrator | 2025-05-19 19:51:44.978851 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-19 19:51:44.978862 | orchestrator | 2025-05-19 19:51:44.978874 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-19 19:51:44.978885 | orchestrator | Monday 19 May 2025 19:49:33 +0000 (0:00:01.090) 0:00:01.090 ************ 2025-05-19 19:51:44.978897 
| orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:51:44.978910 | orchestrator | 2025-05-19 19:51:44.978964 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-19 19:51:44.978985 | orchestrator | Monday 19 May 2025 19:49:33 +0000 (0:00:00.479) 0:00:01.569 ************ 2025-05-19 19:51:44.979005 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-0) 2025-05-19 19:51:44.979024 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1) 2025-05-19 19:51:44.979041 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2) 2025-05-19 19:51:44.979055 | orchestrator | 2025-05-19 19:51:44.979089 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-19 19:51:44.979100 | orchestrator | Monday 19 May 2025 19:49:34 +0000 (0:00:00.721) 0:00:02.291 ************ 2025-05-19 19:51:44.979112 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:51:44.979123 | orchestrator | 2025-05-19 19:51:44.979134 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-19 19:51:44.979146 | orchestrator | Monday 19 May 2025 19:49:35 +0000 (0:00:00.784) 0:00:03.075 ************ 2025-05-19 19:51:44.979157 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979169 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979180 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979191 | orchestrator | 2025-05-19 19:51:44.979202 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-19 19:51:44.979235 | orchestrator | Monday 19 May 2025 19:49:35 +0000 (0:00:00.806) 0:00:03.882 ************ 2025-05-19 19:51:44.979246 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979257 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979267 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979278 | orchestrator | 2025-05-19 19:51:44.979289 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-19 19:51:44.979299 | orchestrator | Monday 19 May 2025 19:49:36 +0000 (0:00:00.300) 0:00:04.182 ************ 2025-05-19 19:51:44.979310 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979321 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979331 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979342 | orchestrator | 2025-05-19 19:51:44.979353 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-19 19:51:44.979364 | orchestrator | Monday 19 May 2025 19:49:37 +0000 (0:00:00.981) 0:00:05.164 ************ 2025-05-19 19:51:44.979374 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979385 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979396 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979406 | orchestrator | 2025-05-19 19:51:44.979417 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-19 19:51:44.979429 | orchestrator | Monday 19 May 2025 19:49:37 +0000 (0:00:00.348) 0:00:05.512 ************ 2025-05-19 19:51:44.979441 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979453 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979464 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 19:51:44.979476 | orchestrator | 2025-05-19 19:51:44.979488 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-19 19:51:44.979500 | orchestrator | Monday 19 May 2025 19:49:37 +0000 (0:00:00.354) 0:00:05.866 ************ 2025-05-19 19:51:44.979512 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979525 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979537 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979548 | orchestrator | 2025-05-19 19:51:44.979561 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-19 19:51:44.979573 | orchestrator | Monday 19 May 2025 19:49:38 +0000 (0:00:00.332) 0:00:06.199 ************ 2025-05-19 19:51:44.979586 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.979599 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.979611 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.979622 | orchestrator | 2025-05-19 19:51:44.979635 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-19 19:51:44.979651 | orchestrator | Monday 19 May 2025 19:49:38 +0000 (0:00:00.552) 0:00:06.752 ************ 2025-05-19 19:51:44.979669 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979687 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979705 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979722 | orchestrator | 2025-05-19 19:51:44.979739 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-19 19:51:44.979757 | orchestrator | Monday 19 May 2025 19:49:39 +0000 (0:00:00.338) 0:00:07.091 ************ 2025-05-19 19:51:44.979775 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 19:51:44.979792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:51:44.979809 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:51:44.979829 | orchestrator | 2025-05-19 19:51:44.979847 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-19 19:51:44.979864 | orchestrator | Monday 19 May 2025 19:49:39 +0000 (0:00:00.779) 0:00:07.870 ************ 2025-05-19 19:51:44.979880 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.979891 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.979901 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.979912 | orchestrator | 2025-05-19 19:51:44.979966 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-19 19:51:44.980001 | orchestrator | Monday 19 May 2025 19:49:40 +0000 (0:00:00.525) 0:00:08.396 ************ 2025-05-19 19:51:44.980034 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 19:51:44.980063 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:51:44.980083 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:51:44.980102 | orchestrator | 2025-05-19 19:51:44.980120 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-19 19:51:44.980139 | orchestrator | Monday 19 May 2025 19:49:42 +0000 (0:00:02.369) 
0:00:10.766 ************ 2025-05-19 19:51:44.980156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:51:44.980173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:51:44.980184 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:51:44.980195 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.980205 | orchestrator | 2025-05-19 19:51:44.980216 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-19 19:51:44.980226 | orchestrator | Monday 19 May 2025 19:49:43 +0000 (0:00:00.319) 0:00:11.086 ************ 2025-05-19 19:51:44.980240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980290 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.980308 | orchestrator | 2025-05-19 19:51:44.980326 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-19 19:51:44.980344 | orchestrator | Monday 19 May 2025 19:49:43 +0000 (0:00:00.529) 0:00:11.615 ************ 2025-05-19 19:51:44.980366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980407 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:51:44.980427 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.980445 | orchestrator | 2025-05-19 19:51:44.980463 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-19 19:51:44.980495 | orchestrator | Monday 19 May 2025 19:49:43 +0000 
(0:00:00.144) 0:00:11.759 ************ 2025-05-19 19:51:44.980518 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '443ba7712a38', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 19:49:41.349305', 'end': '2025-05-19 19:49:41.388598', 'delta': '0:00:00.039293', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['443ba7712a38'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-19 19:51:44.980563 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '006e5e5b90be', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 19:49:42.022546', 'end': '2025-05-19 19:49:42.078890', 'delta': '0:00:00.056344', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['006e5e5b90be'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-19 19:51:44.980576 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': 'bb94dedf38a2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 19:49:42.563977', 'end': '2025-05-19 19:49:42.601248', 'delta': '0:00:00.037271', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bb94dedf38a2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-19 19:51:44.980588 | orchestrator | 2025-05-19 19:51:44.980599 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-19 19:51:44.980610 | orchestrator | Monday 19 May 2025 19:49:44 +0000 (0:00:00.184) 0:00:11.944 ************ 2025-05-19 19:51:44.980621 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.980635 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.980653 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.980671 | orchestrator | 2025-05-19 19:51:44.980689 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-19 19:51:44.980707 | orchestrator | Monday 19 May 2025 19:49:44 +0000 (0:00:00.418) 0:00:12.362 ************ 2025-05-19 19:51:44.980725 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-19 19:51:44.980744 | orchestrator | 2025-05-19 19:51:44.980763 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-19 19:51:44.980782 | orchestrator | Monday 19 May 2025 19:49:45 +0000 (0:00:01.267) 0:00:13.630 ************ 2025-05-19 19:51:44.980799 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
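The "find a running mon container" task above is a plain docker ps lookup, and the _container_exec_cmd fact it feeds is what later tasks prepend to ceph commands so they run inside a monitor container. A minimal sketch of that pattern, reusing the exact filter shown in the log output; the real ceph-facts role derives the target monitor hostname dynamically rather than hard-coding it as done here:

- name: Locate a running ceph-mon container and build an exec prefix (sketch)
  hosts: testbed-node-3
  tasks:
    - name: Ask Docker for the monitor container ID on each mon host
      ansible.builtin.command: "docker ps -q --filter name=ceph-mon-{{ item }}"
      delegate_to: "{{ item }}"
      register: mon_containers
      changed_when: false
      loop:
        - testbed-node-0
        - testbed-node-1
        - testbed-node-2

    - name: Build the command prefix used to run ceph inside a monitor container
      ansible.builtin.set_fact:
        # assumed form; the role selects the monitor host dynamically from its facts
        container_exec_cmd: "docker exec ceph-mon-testbed-node-0"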
19:51:44.980818 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.980836 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.980854 | orchestrator | 2025-05-19 19:51:44.980873 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-19 19:51:44.980890 | orchestrator | Monday 19 May 2025 19:49:46 +0000 (0:00:00.421) 0:00:14.051 ************ 2025-05-19 19:51:44.980908 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.980997 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981019 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981036 | orchestrator | 2025-05-19 19:51:44.981068 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:51:44.981088 | orchestrator | Monday 19 May 2025 19:49:46 +0000 (0:00:00.332) 0:00:14.384 ************ 2025-05-19 19:51:44.981105 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981124 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981143 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981161 | orchestrator | 2025-05-19 19:51:44.981180 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-19 19:51:44.981199 | orchestrator | Monday 19 May 2025 19:49:46 +0000 (0:00:00.286) 0:00:14.670 ************ 2025-05-19 19:51:44.981216 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.981235 | orchestrator | 2025-05-19 19:51:44.981253 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-19 19:51:44.981270 | orchestrator | Monday 19 May 2025 19:49:46 +0000 (0:00:00.097) 0:00:14.768 ************ 2025-05-19 19:51:44.981289 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981307 | orchestrator | 2025-05-19 19:51:44.981325 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:51:44.981343 | orchestrator | Monday 19 May 2025 19:49:47 +0000 (0:00:00.223) 0:00:14.991 ************ 2025-05-19 19:51:44.981362 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981381 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981398 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981418 | orchestrator | 2025-05-19 19:51:44.981429 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-19 19:51:44.981440 | orchestrator | Monday 19 May 2025 19:49:47 +0000 (0:00:00.420) 0:00:15.412 ************ 2025-05-19 19:51:44.981450 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981461 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981471 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981482 | orchestrator | 2025-05-19 19:51:44.981493 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-19 19:51:44.981503 | orchestrator | Monday 19 May 2025 19:49:47 +0000 (0:00:00.422) 0:00:15.834 ************ 2025-05-19 19:51:44.981514 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981525 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981535 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981546 | orchestrator | 2025-05-19 19:51:44.981556 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-19 19:51:44.981566 | orchestrator | Monday 
19 May 2025 19:49:48 +0000 (0:00:00.370) 0:00:16.204 ************ 2025-05-19 19:51:44.981576 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981586 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981604 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981614 | orchestrator | 2025-05-19 19:51:44.981633 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-19 19:51:44.981649 | orchestrator | Monday 19 May 2025 19:49:48 +0000 (0:00:00.288) 0:00:16.492 ************ 2025-05-19 19:51:44.981665 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981680 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981696 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981712 | orchestrator | 2025-05-19 19:51:44.981728 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-19 19:51:44.981744 | orchestrator | Monday 19 May 2025 19:49:49 +0000 (0:00:00.445) 0:00:16.938 ************ 2025-05-19 19:51:44.981761 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981774 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981784 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981798 | orchestrator | 2025-05-19 19:51:44.981813 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-19 19:51:44.981829 | orchestrator | Monday 19 May 2025 19:49:49 +0000 (0:00:00.382) 0:00:17.320 ************ 2025-05-19 19:51:44.981846 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.981863 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.981892 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.981909 | orchestrator | 2025-05-19 19:51:44.981950 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 19:51:44.981969 | orchestrator | Monday 19 May 2025 19:49:49 +0000 (0:00:00.336) 0:00:17.657 ************ 2025-05-19 19:51:44.981987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6eb1ee5c--85e6--559d--849b--4772bddae6d6-osd--block--6eb1ee5c--85e6--559d--849b--4772bddae6d6', 'dm-uuid-LVM-9G3JKyLpm2eulDt8H5JQ8xao4AIZs96Pqgn8fmPhBPH2BnEBaDLaLEHZ1LYWdS5n'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--702b6aa6--b3de--5669--bdb1--4e94528c6268-osd--block--702b6aa6--b3de--5669--bdb1--4e94528c6268', 'dm-uuid-LVM-EcY4Icp9gMytsTnktMDhXTZHo5reP7AzSvssmKtQTAOVlNL0xjz3tqc7e35Z2eDI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e-osd--block--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e', 'dm-uuid-LVM-9ydvbIPK60Ubi030XTiKL0ZpcekX4L92BOloCJEVVe3W1zlEjNY4qiOclzcnNn3I'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5fdf60fa--c839--55c0--9693--b393079e2a5b-osd--block--5fdf60fa--c839--55c0--9693--b393079e2a5b', 'dm-uuid-LVM-2s5sYhJPDuTuBARqueuiVIR5q5p11BvL8L7pyJcFEzFQZ45bpvYD50eX9rJ3h4d0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part1', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part14', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part15', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part16', 'scsi-SQEMU_QEMU_HARDDISK_343e5b57-eba5-4b83-86e1-b9250508edd4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6eb1ee5c--85e6--559d--849b--4772bddae6d6-osd--block--6eb1ee5c--85e6--559d--849b--4772bddae6d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g9Jdtg-WbpU-nvHv-zqRJ-MtkK-jX13-As6jCf', 'scsi-0QEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199', 'scsi-SQEMU_QEMU_HARDDISK_4a1dc982-c7ec-4970-a1b2-e96be6dbc199'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--702b6aa6--b3de--5669--bdb1--4e94528c6268-osd--block--702b6aa6--b3de--5669--bdb1--4e94528c6268'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Z15ey2-n9T6-dJCv-uCj8-53Ow-Gryt-OnjaxQ', 'scsi-0QEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2', 'scsi-SQEMU_QEMU_HARDDISK_ccb5460a-d35b-438c-9adb-1ec03f5b0ca2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53', 'scsi-SQEMU_QEMU_HARDDISK_d327778e-2231-4334-9e4b-af08a803eb53'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-50-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part1', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part14', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part15', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part16', 'scsi-SQEMU_QEMU_HARDDISK_bd4a323c-070b-40ce-9313-87b44bb33677-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982596 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.982612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f4656c6e--aa1c--5ab7--9900--7160e6354d4d-osd--block--f4656c6e--aa1c--5ab7--9900--7160e6354d4d', 
'dm-uuid-LVM-ZcvUkVNDJQ8ioS2hHP5OAdcnSwBf0wOOQbTLSttY0OOy3lysEgjBR6Ap5RTZT3jN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e-osd--block--54ed6fee--c89e--5ff4--bbfb--dc8e4c8c481e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GjEcbQ-8XTh-OJxP-eit7-pJLB-al7v-dp9LyB', 'scsi-0QEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3', 'scsi-SQEMU_QEMU_HARDDISK_69146676-2ac4-45fa-96a7-ebd6f82ff2f3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5646b4ad--081a--5fe7--ab17--c0ecc5756623-osd--block--5646b4ad--081a--5fe7--ab17--c0ecc5756623', 'dm-uuid-LVM-I8PlVyy6GMjqNKUv5gU6nLfO3zQU2H4gcMOpber6zQpK3AHN5ZVXQcDe15mSTIUk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5fdf60fa--c839--55c0--9693--b393079e2a5b-osd--block--5fdf60fa--c839--55c0--9693--b393079e2a5b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-klAHA5-Nodk-mQ8E-7ylE-uJlv-uJ8W-sIEmp5', 'scsi-0QEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3', 'scsi-SQEMU_QEMU_HARDDISK_75dd3d3f-610d-4410-ad7d-41af206bb5b3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982654 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d', 'scsi-SQEMU_QEMU_HARDDISK_f14fc737-7fc7-4300-a12c-0d45556a294d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982715 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.982725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982765 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:51:44.982810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part1', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part14', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part15', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part16', 'scsi-SQEMU_QEMU_HARDDISK_49c2c95e-ca71-42b4-aa69-7630ee3c63b4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982822 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f4656c6e--aa1c--5ab7--9900--7160e6354d4d-osd--block--f4656c6e--aa1c--5ab7--9900--7160e6354d4d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hkxRGc-a5Nm-QuIu-VeIy-flWs-37nk-FOX5q3', 'scsi-0QEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091', 'scsi-SQEMU_QEMU_HARDDISK_cc8857f4-0920-4071-aa29-561fcd5ac091'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5646b4ad--081a--5fe7--ab17--c0ecc5756623-osd--block--5646b4ad--081a--5fe7--ab17--c0ecc5756623'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qScHiP-CxQ1-lQnO-87g3-L2C7-u0fI-W981ja', 'scsi-0QEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20', 'scsi-SQEMU_QEMU_HARDDISK_61384220-7968-49f8-abf1-ef218bf9da20'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be', 'scsi-SQEMU_QEMU_HARDDISK_cefbdaf0-1f4e-46ad-9d0a-02354cb171be'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:51:44.982882 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.982892 | orchestrator | 2025-05-19 19:51:44.982902 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-19 19:51:44.982912 | orchestrator | Monday 19 May 2025 19:49:50 +0000 (0:00:00.701) 0:00:18.359 ************ 2025-05-19 19:51:44.982943 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-19 19:51:44.982954 | orchestrator | 2025-05-19 19:51:44.982964 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-19 19:51:44.982973 | orchestrator | Monday 19 May 2025 19:49:51 +0000 (0:00:01.479) 0:00:19.838 ************ 2025-05-19 19:51:44.982983 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.982992 | orchestrator | 2025-05-19 19:51:44.983002 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] 
************************************** 2025-05-19 19:51:44.983011 | orchestrator | Monday 19 May 2025 19:49:52 +0000 (0:00:00.185) 0:00:20.024 ************ 2025-05-19 19:51:44.983021 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.983031 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.983040 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.983050 | orchestrator | 2025-05-19 19:51:44.983059 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-19 19:51:44.983069 | orchestrator | Monday 19 May 2025 19:49:52 +0000 (0:00:00.458) 0:00:20.482 ************ 2025-05-19 19:51:44.983079 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.983088 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.983098 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.983112 | orchestrator | 2025-05-19 19:51:44.983129 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-19 19:51:44.983146 | orchestrator | Monday 19 May 2025 19:49:53 +0000 (0:00:00.742) 0:00:21.225 ************ 2025-05-19 19:51:44.983163 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.983180 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.983195 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.983212 | orchestrator | 2025-05-19 19:51:44.983228 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:51:44.983246 | orchestrator | Monday 19 May 2025 19:49:53 +0000 (0:00:00.301) 0:00:21.527 ************ 2025-05-19 19:51:44.983262 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.983279 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.983297 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.983314 | orchestrator | 2025-05-19 19:51:44.983333 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:51:44.983352 | orchestrator | Monday 19 May 2025 19:49:54 +0000 (0:00:00.980) 0:00:22.508 ************ 2025-05-19 19:51:44.983362 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.983371 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.983381 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.983390 | orchestrator | 2025-05-19 19:51:44.983400 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:51:44.983409 | orchestrator | Monday 19 May 2025 19:49:54 +0000 (0:00:00.308) 0:00:22.816 ************ 2025-05-19 19:51:44.983418 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.983428 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.983437 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.983446 | orchestrator | 2025-05-19 19:51:44.983456 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:51:44.983465 | orchestrator | Monday 19 May 2025 19:49:55 +0000 (0:00:00.476) 0:00:23.293 ************ 2025-05-19 19:51:44.983474 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.983484 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.983493 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.983502 | orchestrator | 2025-05-19 19:51:44.983512 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-19 19:51:44.983521 | orchestrator | Monday 19 May 2025 19:49:55 +0000 (0:00:00.381) 
0:00:23.675 ************ 2025-05-19 19:51:44.983531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:51:44.983540 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:51:44.983549 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:51:44.983559 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:51:44.983568 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.983578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:51:44.983587 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:51:44.983596 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:51:44.983605 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:51:44.983615 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.983624 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:51:44.983634 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.983643 | orchestrator | 2025-05-19 19:51:44.983653 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-19 19:51:44.983670 | orchestrator | Monday 19 May 2025 19:49:56 +0000 (0:00:01.051) 0:00:24.726 ************ 2025-05-19 19:51:44.983685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:51:44.983695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:51:44.983705 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:51:44.983714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:51:44.983723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:51:44.983733 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.983742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:51:44.983752 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:51:44.983761 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:51:44.983771 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.983780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:51:44.983789 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.983799 | orchestrator | 2025-05-19 19:51:44.983809 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-19 19:51:44.983818 | orchestrator | Monday 19 May 2025 19:49:57 +0000 (0:00:00.785) 0:00:25.512 ************ 2025-05-19 19:51:44.983828 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-19 19:51:44.983844 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-19 19:51:44.983854 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-19 19:51:44.983863 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-19 19:51:44.983873 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-19 19:51:44.983882 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-19 19:51:44.983891 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-19 19:51:44.983901 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-19 19:51:44.983910 | orchestrator | ok: [testbed-node-5] 
=> (item=testbed-node-2) 2025-05-19 19:51:44.983920 | orchestrator | 2025-05-19 19:51:44.983985 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-19 19:51:44.983995 | orchestrator | Monday 19 May 2025 19:49:59 +0000 (0:00:01.471) 0:00:26.983 ************ 2025-05-19 19:51:44.984004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:51:44.984014 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:51:44.984023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:51:44.984033 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984042 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:51:44.984052 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:51:44.984061 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:51:44.984070 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984080 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:51:44.984089 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:51:44.984099 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:51:44.984108 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984117 | orchestrator | 2025-05-19 19:51:44.984127 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-19 19:51:44.984136 | orchestrator | Monday 19 May 2025 19:49:59 +0000 (0:00:00.602) 0:00:27.585 ************ 2025-05-19 19:51:44.984146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 19:51:44.984155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 19:51:44.984165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 19:51:44.984174 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 19:51:44.984183 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 19:51:44.984193 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984202 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 19:51:44.984212 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984221 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 19:51:44.984231 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 19:51:44.984240 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 19:51:44.984249 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984259 | orchestrator | 2025-05-19 19:51:44.984268 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-19 19:51:44.984278 | orchestrator | Monday 19 May 2025 19:50:00 +0000 (0:00:00.445) 0:00:28.031 ************ 2025-05-19 19:51:44.984287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:51:44.984298 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:51:44.984307 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:51:44.984316 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984326 
| orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:51:44.984344 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:51:44.984354 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:51:44.984364 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984373 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-19 19:51:44.984395 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:51:44.984405 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:51:44.984414 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984424 | orchestrator | 2025-05-19 19:51:44.984433 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-19 19:51:44.984443 | orchestrator | Monday 19 May 2025 19:50:00 +0000 (0:00:00.402) 0:00:28.433 ************ 2025-05-19 19:51:44.984452 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:51:44.984462 | orchestrator | 2025-05-19 19:51:44.984472 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 19:51:44.984482 | orchestrator | Monday 19 May 2025 19:50:01 +0000 (0:00:00.758) 0:00:29.191 ************ 2025-05-19 19:51:44.984492 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984501 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984509 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984517 | orchestrator | 2025-05-19 19:51:44.984525 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 19:51:44.984533 | orchestrator | Monday 19 May 2025 19:50:01 +0000 (0:00:00.337) 0:00:29.529 ************ 2025-05-19 19:51:44.984540 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984548 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984556 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984563 | orchestrator | 2025-05-19 19:51:44.984571 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 19:51:44.984579 | orchestrator | Monday 19 May 2025 19:50:01 +0000 (0:00:00.322) 0:00:29.852 ************ 2025-05-19 19:51:44.984587 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984594 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984602 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984610 | orchestrator | 2025-05-19 19:51:44.984617 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 19:51:44.984625 | orchestrator | Monday 19 May 2025 19:50:02 +0000 (0:00:00.340) 0:00:30.193 ************ 2025-05-19 19:51:44.984633 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.984641 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.984648 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.984656 | orchestrator | 2025-05-19 19:51:44.984664 | orchestrator | TASK [ceph-facts : set_fact _interface] 
**************************************** 2025-05-19 19:51:44.984672 | orchestrator | Monday 19 May 2025 19:50:02 +0000 (0:00:00.667) 0:00:30.860 ************ 2025-05-19 19:51:44.984680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:51:44.984687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:51:44.984695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:51:44.984703 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984710 | orchestrator | 2025-05-19 19:51:44.984718 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 19:51:44.984726 | orchestrator | Monday 19 May 2025 19:50:03 +0000 (0:00:00.412) 0:00:31.272 ************ 2025-05-19 19:51:44.984734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:51:44.984741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:51:44.984754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:51:44.984762 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984770 | orchestrator | 2025-05-19 19:51:44.984778 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 19:51:44.984785 | orchestrator | Monday 19 May 2025 19:50:03 +0000 (0:00:00.417) 0:00:31.690 ************ 2025-05-19 19:51:44.984793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:51:44.984801 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:51:44.984808 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:51:44.984816 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984824 | orchestrator | 2025-05-19 19:51:44.984831 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:51:44.984839 | orchestrator | Monday 19 May 2025 19:50:04 +0000 (0:00:00.421) 0:00:32.112 ************ 2025-05-19 19:51:44.984847 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:51:44.984855 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:51:44.984863 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:51:44.984870 | orchestrator | 2025-05-19 19:51:44.984878 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-19 19:51:44.984886 | orchestrator | Monday 19 May 2025 19:50:04 +0000 (0:00:00.327) 0:00:32.440 ************ 2025-05-19 19:51:44.984894 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 19:51:44.984901 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 19:51:44.984909 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 19:51:44.984917 | orchestrator | 2025-05-19 19:51:44.984939 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-19 19:51:44.984947 | orchestrator | Monday 19 May 2025 19:50:05 +0000 (0:00:00.533) 0:00:32.974 ************ 2025-05-19 19:51:44.984955 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.984962 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.984970 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.984978 | orchestrator | 2025-05-19 19:51:44.984985 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-19 19:51:44.984993 | orchestrator | Monday 19 May 2025 19:50:05 
+0000 (0:00:00.549) 0:00:33.523 ************ 2025-05-19 19:51:44.985001 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985009 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985016 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.985024 | orchestrator | 2025-05-19 19:51:44.985032 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-19 19:51:44.985044 | orchestrator | Monday 19 May 2025 19:50:05 +0000 (0:00:00.366) 0:00:33.890 ************ 2025-05-19 19:51:44.985052 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-19 19:51:44.985064 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985072 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-19 19:51:44.985080 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985087 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-19 19:51:44.985095 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.985103 | orchestrator | 2025-05-19 19:51:44.985110 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-19 19:51:44.985118 | orchestrator | Monday 19 May 2025 19:50:06 +0000 (0:00:00.443) 0:00:34.333 ************ 2025-05-19 19:51:44.985126 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-19 19:51:44.985134 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985142 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-19 19:51:44.985150 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985157 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-19 19:51:44.985171 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.985179 | orchestrator | 2025-05-19 19:51:44.985186 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-19 19:51:44.985194 | orchestrator | Monday 19 May 2025 19:50:06 +0000 (0:00:00.369) 0:00:34.703 ************ 2025-05-19 19:51:44.985202 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 19:51:44.985210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 19:51:44.985217 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 19:51:44.985225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 19:51:44.985233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 19:51:44.985241 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985248 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 19:51:44.985256 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 19:51:44.985264 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 19:51:44.985271 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.985279 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 19:51:44.985287 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985294 | orchestrator | 2025-05-19 19:51:44.985302 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old 
ceph-iscsi-config/cli] *** 2025-05-19 19:51:44.985310 | orchestrator | Monday 19 May 2025 19:50:07 +0000 (0:00:01.109) 0:00:35.812 ************ 2025-05-19 19:51:44.985318 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985326 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985333 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:51:44.985341 | orchestrator | 2025-05-19 19:51:44.985349 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-19 19:51:44.985356 | orchestrator | Monday 19 May 2025 19:50:08 +0000 (0:00:00.320) 0:00:36.133 ************ 2025-05-19 19:51:44.985364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 19:51:44.985372 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:51:44.985380 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:51:44.985388 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-19 19:51:44.985395 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:51:44.985403 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:51:44.985411 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:51:44.985419 | orchestrator | 2025-05-19 19:51:44.985426 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-19 19:51:44.985434 | orchestrator | Monday 19 May 2025 19:50:09 +0000 (0:00:01.135) 0:00:37.269 ************ 2025-05-19 19:51:44.985442 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 19:51:44.985450 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:51:44.985457 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:51:44.985465 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-19 19:51:44.985473 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:51:44.985481 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:51:44.985488 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:51:44.985496 | orchestrator | 2025-05-19 19:51:44.985504 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-19 19:51:44.985517 | orchestrator | Monday 19 May 2025 19:50:11 +0000 (0:00:02.057) 0:00:39.327 ************ 2025-05-19 19:51:44.985525 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:51:44.985532 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:51:44.985540 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-19 19:51:44.985548 | orchestrator | 2025-05-19 19:51:44.985555 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-19 19:51:44.985567 | orchestrator | Monday 19 May 2025 19:50:11 +0000 (0:00:00.574) 0:00:39.902 ************ 2025-05-19 19:51:44.985581 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 
'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:51:44.985591 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:51:44.985600 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:51:44.985608 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:51:44.985616 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 19:51:44.985624 | orchestrator | 2025-05-19 19:51:44.985632 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-19 19:51:44.985640 | orchestrator | Monday 19 May 2025 19:50:54 +0000 (0:00:42.642) 0:01:22.544 ************ 2025-05-19 19:51:44.985648 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985655 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985663 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985671 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985678 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985686 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985694 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-19 19:51:44.985701 | orchestrator | 2025-05-19 19:51:44.985709 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-19 19:51:44.985717 | orchestrator | Monday 19 May 2025 19:51:15 +0000 (0:00:20.671) 0:01:43.216 ************ 2025-05-19 19:51:44.985725 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985732 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985740 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985748 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985755 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 19:51:44.985768 | orchestrator | ok: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985776 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-19 19:51:44.985783 | orchestrator |
2025-05-19 19:51:44.985791 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-19 19:51:44.985799 | orchestrator | Monday 19 May 2025 19:51:25 +0000 (0:00:10.166) 0:01:53.382 ************
2025-05-19 19:51:44.985806 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985814 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985822 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985829 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985837 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985845 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985853 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985860 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985868 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985876 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985883 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985896 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985908 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985916 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985939 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985947 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 19:51:44.985954 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 19:51:44.985962 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 19:51:44.985970 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-19 19:51:44.985978 | orchestrator |
2025-05-19 19:51:44.985985 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:51:44.985993 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-19 19:51:44.986002 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-05-19 19:51:44.986010 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-05-19 19:51:44.986044 | orchestrator |
2025-05-19 19:51:44.986052 | orchestrator |
2025-05-19 19:51:44.986060 | orchestrator |
2025-05-19 19:51:44.986067 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 19:51:44.986075 | orchestrator | Monday 19 May 2025 19:51:43 +0000 (0:00:18.204) 0:02:11.586 ************
2025-05-19 19:51:44.986083 | orchestrator | ===============================================================================
2025-05-19 19:51:44.986091 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.64s
2025-05-19 19:51:44.986098 | orchestrator | generate keys ---------------------------------------------------------- 20.67s
2025-05-19 19:51:44.986106 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.20s
2025-05-19 19:51:44.986114 | orchestrator | get keys from monitors ------------------------------------------------- 10.17s
2025-05-19 19:51:44.986130 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.37s
2025-05-19 19:51:44.986138 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 2.06s
2025-05-19 19:51:44.986145 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.48s
2025-05-19 19:51:44.986153 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.47s
2025-05-19 19:51:44.986161 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.27s
2025-05-19 19:51:44.986168 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.14s
2025-05-19 19:51:44.986176 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 1.11s
2025-05-19 19:51:44.986184 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 1.05s
2025-05-19 19:51:44.986191 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.98s
2025-05-19 19:51:44.986199 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.98s
2025-05-19 19:51:44.986207 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.81s
2025-05-19 19:51:44.986215 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.79s
2025-05-19 19:51:44.986222 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.78s
2025-05-19 19:51:44.986230 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.78s
2025-05-19 19:51:44.986238 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.76s
2025-05-19 19:51:44.986245 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.74s
2025-05-19 19:51:44.986253 | orchestrator | 2025-05-19 19:51:44 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED
2025-05-19 19:51:44.986261 | orchestrator | 2025-05-19 19:51:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:51:48.014446 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED
2025-05-19 19:51:48.014589 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:51:48.015350 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:51:48.016098 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED
2025-05-19 19:51:48.016952 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7
is in state STARTED 2025-05-19 19:51:48.017568 | orchestrator | 2025-05-19 19:51:48 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:51:48.017659 | orchestrator | 2025-05-19 19:51:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:51:51.049767 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:51:51.050095 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:51:51.051376 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:51:51.052468 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:51:51.054205 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:51:51.058607 | orchestrator | 2025-05-19 19:51:51 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:51:51.058662 | orchestrator | 2025-05-19 19:51:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:51:54.112429 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:51:54.115115 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:51:54.117258 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:51:54.119571 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:51:54.121220 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:51:54.122566 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:51:54.124065 | orchestrator | 2025-05-19 19:51:54 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:51:54.124104 | orchestrator | 2025-05-19 19:51:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:51:57.181029 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:51:57.181155 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:51:57.181856 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:51:57.183144 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:51:57.183553 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:51:57.184375 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:51:57.185409 | orchestrator | 2025-05-19 19:51:57 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:51:57.185434 | orchestrator | 2025-05-19 19:51:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:00.237640 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:00.238498 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task 
cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:00.239745 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:00.241462 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:00.242861 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:00.243851 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:00.245039 | orchestrator | 2025-05-19 19:52:00 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:00.245084 | orchestrator | 2025-05-19 19:52:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:03.291630 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:03.297343 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:03.302554 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:03.304151 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:03.306663 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:03.308961 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:03.311283 | orchestrator | 2025-05-19 19:52:03 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:03.312036 | orchestrator | 2025-05-19 19:52:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:06.353476 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:06.354096 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:06.355779 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:06.357638 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:06.358674 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:06.360181 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:06.361287 | orchestrator | 2025-05-19 19:52:06 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:06.361317 | orchestrator | 2025-05-19 19:52:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:09.411014 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:09.412055 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:09.413503 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:09.415552 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:09.417165 | 
orchestrator | 2025-05-19 19:52:09 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:09.418421 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:09.419073 | orchestrator | 2025-05-19 19:52:09 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:09.419111 | orchestrator | 2025-05-19 19:52:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:12.483067 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:12.484455 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:12.487057 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:12.489713 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:12.491327 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:12.494154 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:12.495515 | orchestrator | 2025-05-19 19:52:12 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:12.495961 | orchestrator | 2025-05-19 19:52:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:15.549176 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:15.553094 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:15.562780 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:15.566207 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:15.568246 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:15.576072 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:15.576151 | orchestrator | 2025-05-19 19:52:15 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:15.576173 | orchestrator | 2025-05-19 19:52:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:18.623298 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:18.625434 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:18.627073 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:18.628766 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:18.630849 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:18.632593 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:18.636930 | orchestrator | 2025-05-19 19:52:18 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is 
in state STARTED 2025-05-19 19:52:18.637354 | orchestrator | 2025-05-19 19:52:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:21.676270 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:21.677656 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state STARTED 2025-05-19 19:52:21.678649 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:21.679803 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:21.682867 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:21.684474 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:21.685914 | orchestrator | 2025-05-19 19:52:21 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:21.686078 | orchestrator | 2025-05-19 19:52:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:24.733520 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:24.735159 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task cf23887e-7c58-4ed2-add1-e4ca96b21feb is in state SUCCESS 2025-05-19 19:52:24.736477 | orchestrator | 2025-05-19 19:52:24.736558 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:52:24.736574 | orchestrator | 2025-05-19 19:52:24.736587 | orchestrator | PLAY [Apply role fetch-keys] *************************************************** 2025-05-19 19:52:24.736625 | orchestrator | 2025-05-19 19:52:24.736638 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ******** 2025-05-19 19:52:24.736650 | orchestrator | Monday 19 May 2025 19:51:55 +0000 (0:00:00.449) 0:00:00.449 ************ 2025-05-19 19:52:24.736660 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-19 19:52:24.736673 | orchestrator | 2025-05-19 19:52:24.736684 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-19 19:52:24.736695 | orchestrator | Monday 19 May 2025 19:51:55 +0000 (0:00:00.225) 0:00:00.674 ************ 2025-05-19 19:52:24.736707 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.736719 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 19:52:24.736730 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 19:52:24.736741 | orchestrator | 2025-05-19 19:52:24.736751 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-19 19:52:24.736762 | orchestrator | Monday 19 May 2025 19:51:56 +0000 (0:00:00.919) 0:00:01.594 ************ 2025-05-19 19:52:24.736773 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-19 19:52:24.736784 | orchestrator | 2025-05-19 19:52:24.736797 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-19 19:52:24.736844 | orchestrator | Monday 19 May 2025 19:51:56 +0000 (0:00:00.255) 0:00:01.850 ************ 2025-05-19 19:52:24.736865 | orchestrator | ok: [testbed-node-0] 
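The repeated "Task <id> is in state STARTED" / "Wait 1 second(s) until the next check" messages in the wait loop above come from the deployment client polling the state of the queued tasks once per second until they finish. The following is a minimal sketch of that polling pattern in Python, assuming a hypothetical HTTP status endpoint (/tasks/<id>) rather than the real OSISM task API; the endpoint, JSON shape, and names are illustrative only:

    import time
    import requests

    def wait_for_tasks(api_url, task_ids, interval=1):
        # Poll each still-pending task and drop it once it reaches a final state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                # Hypothetical endpoint; the real client talks to its own task API.
                state = requests.get(f"{api_url}/tasks/{task_id}").json()["state"]
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

This only mirrors the output pattern visible in the log; error handling, authentication, and timeouts are omitted.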
2025-05-19 19:52:24.736883 | orchestrator | 2025-05-19 19:52:24.736900 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-19 19:52:24.736917 | orchestrator | Monday 19 May 2025 19:51:57 +0000 (0:00:00.594) 0:00:02.444 ************ 2025-05-19 19:52:24.736934 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.736951 | orchestrator | 2025-05-19 19:52:24.736966 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-19 19:52:24.736981 | orchestrator | Monday 19 May 2025 19:51:57 +0000 (0:00:00.141) 0:00:02.586 ************ 2025-05-19 19:52:24.736996 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737013 | orchestrator | 2025-05-19 19:52:24.737029 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-19 19:52:24.737044 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.473) 0:00:03.059 ************ 2025-05-19 19:52:24.737059 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737075 | orchestrator | 2025-05-19 19:52:24.737093 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-19 19:52:24.737265 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.127) 0:00:03.186 ************ 2025-05-19 19:52:24.737288 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737301 | orchestrator | 2025-05-19 19:52:24.737313 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-19 19:52:24.737326 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.131) 0:00:03.317 ************ 2025-05-19 19:52:24.737338 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737350 | orchestrator | 2025-05-19 19:52:24.737363 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-19 19:52:24.737375 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.165) 0:00:03.483 ************ 2025-05-19 19:52:24.737388 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.737401 | orchestrator | 2025-05-19 19:52:24.737413 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-19 19:52:24.737426 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.134) 0:00:03.618 ************ 2025-05-19 19:52:24.737436 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737447 | orchestrator | 2025-05-19 19:52:24.737458 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-19 19:52:24.737469 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:00.327) 0:00:03.946 ************ 2025-05-19 19:52:24.737480 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.737505 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:52:24.737516 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:52:24.737527 | orchestrator | 2025-05-19 19:52:24.737537 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-19 19:52:24.737548 | orchestrator | Monday 19 May 2025 19:51:59 +0000 (0:00:00.697) 0:00:04.644 ************ 2025-05-19 19:52:24.737559 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.737570 | orchestrator | 2025-05-19 19:52:24.737580 | orchestrator | TASK [ceph-facts : find a 
running mon container] ******************************* 2025-05-19 19:52:24.737591 | orchestrator | Monday 19 May 2025 19:51:59 +0000 (0:00:00.229) 0:00:04.874 ************ 2025-05-19 19:52:24.737602 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.737613 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:52:24.737624 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:52:24.737634 | orchestrator | 2025-05-19 19:52:24.737645 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-19 19:52:24.737656 | orchestrator | Monday 19 May 2025 19:52:01 +0000 (0:00:02.027) 0:00:06.901 ************ 2025-05-19 19:52:24.737667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:52:24.737678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:52:24.737689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:52:24.737700 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.737711 | orchestrator | 2025-05-19 19:52:24.737723 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-19 19:52:24.737753 | orchestrator | Monday 19 May 2025 19:52:02 +0000 (0:00:00.422) 0:00:07.324 ************ 2025-05-19 19:52:24.737768 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 19:52:24.737783 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 19:52:24.737856 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 19:52:24.737879 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.737899 | orchestrator | 2025-05-19 19:52:24.737917 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-19 19:52:24.737935 | orchestrator | Monday 19 May 2025 19:52:03 +0000 (0:00:00.863) 0:00:08.188 ************ 2025-05-19 19:52:24.737950 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:52:24.737964 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  
2025-05-19 19:52:24.737984 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 19:52:24.738005 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738103 | orchestrator | 2025-05-19 19:52:24.738120 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-19 19:52:24.738132 | orchestrator | Monday 19 May 2025 19:52:03 +0000 (0:00:00.179) 0:00:08.368 ************ 2025-05-19 19:52:24.738146 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '443ba7712a38', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 19:52:00.597809', 'end': '2025-05-19 19:52:00.641702', 'delta': '0:00:00.043893', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['443ba7712a38'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-19 19:52:24.738162 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '006e5e5b90be', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 19:52:01.162892', 'end': '2025-05-19 19:52:01.208947', 'delta': '0:00:00.046055', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['006e5e5b90be'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-19 19:52:24.738187 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': 'bb94dedf38a2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 19:52:01.733006', 'end': '2025-05-19 19:52:01.775379', 'delta': '0:00:00.042373', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bb94dedf38a2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-19 19:52:24.738198 | orchestrator | 2025-05-19 19:52:24.738210 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-19 19:52:24.738220 | orchestrator | Monday 19 May 2025 19:52:03 +0000 (0:00:00.228) 0:00:08.596 ************ 2025-05-19 19:52:24.738231 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.738242 | orchestrator | 2025-05-19 19:52:24.738253 | orchestrator | TASK [ceph-facts : get 
current fsid if cluster is already running] ************* 2025-05-19 19:52:24.738264 | orchestrator | Monday 19 May 2025 19:52:03 +0000 (0:00:00.254) 0:00:08.851 ************ 2025-05-19 19:52:24.738274 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-19 19:52:24.738285 | orchestrator | 2025-05-19 19:52:24.738296 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-19 19:52:24.738431 | orchestrator | Monday 19 May 2025 19:52:05 +0000 (0:00:01.694) 0:00:10.545 ************ 2025-05-19 19:52:24.738445 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738456 | orchestrator | 2025-05-19 19:52:24.738467 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-19 19:52:24.738488 | orchestrator | Monday 19 May 2025 19:52:05 +0000 (0:00:00.131) 0:00:10.677 ************ 2025-05-19 19:52:24.738499 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738510 | orchestrator | 2025-05-19 19:52:24.738521 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:52:24.738532 | orchestrator | Monday 19 May 2025 19:52:05 +0000 (0:00:00.260) 0:00:10.938 ************ 2025-05-19 19:52:24.738543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738554 | orchestrator | 2025-05-19 19:52:24.738565 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-19 19:52:24.738577 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.145) 0:00:11.083 ************ 2025-05-19 19:52:24.738587 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.738598 | orchestrator | 2025-05-19 19:52:24.738610 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-19 19:52:24.738621 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.139) 0:00:11.223 ************ 2025-05-19 19:52:24.738638 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738650 | orchestrator | 2025-05-19 19:52:24.738660 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-19 19:52:24.738671 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.243) 0:00:11.466 ************ 2025-05-19 19:52:24.738683 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738694 | orchestrator | 2025-05-19 19:52:24.738705 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-19 19:52:24.738716 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.127) 0:00:11.594 ************ 2025-05-19 19:52:24.738726 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738737 | orchestrator | 2025-05-19 19:52:24.738748 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-19 19:52:24.738759 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.119) 0:00:11.713 ************ 2025-05-19 19:52:24.738771 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738781 | orchestrator | 2025-05-19 19:52:24.738797 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-19 19:52:24.738857 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.129) 0:00:11.843 ************ 2025-05-19 19:52:24.738880 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738899 | orchestrator | 2025-05-19 19:52:24.738917 | 
orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-19 19:52:24.738935 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:00.129) 0:00:11.973 ************ 2025-05-19 19:52:24.738962 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.738983 | orchestrator | 2025-05-19 19:52:24.739001 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-19 19:52:24.739019 | orchestrator | Monday 19 May 2025 19:52:07 +0000 (0:00:00.412) 0:00:12.385 ************ 2025-05-19 19:52:24.739035 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739054 | orchestrator | 2025-05-19 19:52:24.739239 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-19 19:52:24.739262 | orchestrator | Monday 19 May 2025 19:52:07 +0000 (0:00:00.135) 0:00:12.521 ************ 2025-05-19 19:52:24.739275 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739288 | orchestrator | 2025-05-19 19:52:24.739300 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 19:52:24.739312 | orchestrator | Monday 19 May 2025 19:52:07 +0000 (0:00:00.137) 0:00:12.659 ************ 2025-05-19 19:52:24.739326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-19 19:52:24.739430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 19:52:24.739479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_4ee7cc0e-f0f1-4d11-af6e-2b98263e3f9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:52:24.739506 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-18-49-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 19:52:24.739519 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739530 | orchestrator | 2025-05-19 19:52:24.739541 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-19 19:52:24.739552 | orchestrator | Monday 19 May 2025 19:52:07 +0000 (0:00:00.271) 0:00:12.930 ************ 2025-05-19 19:52:24.739564 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739575 | orchestrator | 2025-05-19 19:52:24.739586 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-19 19:52:24.739596 | orchestrator | Monday 19 May 2025 19:52:08 +0000 (0:00:00.265) 0:00:13.195 ************ 2025-05-19 19:52:24.739607 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739618 | orchestrator | 2025-05-19 19:52:24.739629 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-19 19:52:24.739645 | orchestrator | Monday 19 May 2025 19:52:08 +0000 (0:00:00.130) 0:00:13.325 ************ 2025-05-19 19:52:24.739656 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739667 | orchestrator | 2025-05-19 19:52:24.739678 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-19 19:52:24.739689 | orchestrator | Monday 19 May 2025 19:52:08 +0000 (0:00:00.130) 0:00:13.456 ************ 2025-05-19 19:52:24.739701 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.739712 | orchestrator | 2025-05-19 19:52:24.739723 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-19 19:52:24.739734 | orchestrator | Monday 19 May 2025 19:52:08 +0000 (0:00:00.517) 0:00:13.973 ************ 2025-05-19 19:52:24.739745 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.739756 | orchestrator | 2025-05-19 19:52:24.739767 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:52:24.739778 | orchestrator | Monday 19 May 2025 19:52:09 +0000 (0:00:00.126) 0:00:14.100 ************ 2025-05-19 19:52:24.739790 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.739801 | orchestrator | 2025-05-19 19:52:24.739812 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:52:24.739855 | orchestrator | Monday 19 May 2025 19:52:09 +0000 (0:00:00.486) 0:00:14.587 ************ 2025-05-19 19:52:24.739867 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.739878 | orchestrator | 2025-05-19 19:52:24.739889 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-19 19:52:24.739908 | orchestrator | Monday 19 May 2025 19:52:09 +0000 (0:00:00.157) 0:00:14.744 ************ 2025-05-19 19:52:24.739919 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739930 | orchestrator | 2025-05-19 19:52:24.739941 | 
orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-19 19:52:24.739955 | orchestrator | Monday 19 May 2025 19:52:10 +0000 (0:00:00.818) 0:00:15.563 ************ 2025-05-19 19:52:24.739973 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.739993 | orchestrator | 2025-05-19 19:52:24.740011 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-19 19:52:24.740029 | orchestrator | Monday 19 May 2025 19:52:10 +0000 (0:00:00.183) 0:00:15.747 ************ 2025-05-19 19:52:24.740047 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:52:24.740064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:52:24.740083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:52:24.740124 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740143 | orchestrator | 2025-05-19 19:52:24.740162 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-19 19:52:24.740183 | orchestrator | Monday 19 May 2025 19:52:11 +0000 (0:00:00.444) 0:00:16.192 ************ 2025-05-19 19:52:24.740202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:52:24.740349 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:52:24.740367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:52:24.740378 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740389 | orchestrator | 2025-05-19 19:52:24.740484 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-19 19:52:24.740499 | orchestrator | Monday 19 May 2025 19:52:11 +0000 (0:00:00.469) 0:00:16.662 ************ 2025-05-19 19:52:24.740510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.740522 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 19:52:24.740533 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 19:52:24.740543 | orchestrator | 2025-05-19 19:52:24.740555 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-19 19:52:24.740566 | orchestrator | Monday 19 May 2025 19:52:12 +0000 (0:00:01.235) 0:00:17.897 ************ 2025-05-19 19:52:24.740577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:52:24.740588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:52:24.740599 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:52:24.740610 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740621 | orchestrator | 2025-05-19 19:52:24.740632 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-19 19:52:24.740643 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:00.217) 0:00:18.115 ************ 2025-05-19 19:52:24.740654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 19:52:24.740665 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 19:52:24.740676 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 19:52:24.740687 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740698 | orchestrator | 2025-05-19 19:52:24.740710 | orchestrator | TASK [ceph-facts : set_fact 
_current_monitor_address] ************************** 2025-05-19 19:52:24.740721 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:00.203) 0:00:18.318 ************ 2025-05-19 19:52:24.740732 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-19 19:52:24.740743 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-19 19:52:24.740755 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-19 19:52:24.740767 | orchestrator | 2025-05-19 19:52:24.740797 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-19 19:52:24.740809 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:00.205) 0:00:18.523 ************ 2025-05-19 19:52:24.740847 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740859 | orchestrator | 2025-05-19 19:52:24.740870 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-19 19:52:24.740881 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:00.137) 0:00:18.661 ************ 2025-05-19 19:52:24.740892 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:52:24.740903 | orchestrator | 2025-05-19 19:52:24.740921 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-19 19:52:24.740933 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:00.143) 0:00:18.804 ************ 2025-05-19 19:52:24.740944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.740955 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:52:24.740966 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:52:24.740977 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 19:52:24.740987 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:52:24.740999 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:52:24.741010 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:52:24.741021 | orchestrator | 2025-05-19 19:52:24.741031 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-19 19:52:24.741042 | orchestrator | Monday 19 May 2025 19:52:15 +0000 (0:00:01.294) 0:00:20.098 ************ 2025-05-19 19:52:24.741053 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 19:52:24.741064 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 19:52:24.741075 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 19:52:24.741085 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 19:52:24.741096 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 19:52:24.741107 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 19:52:24.741117 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 19:52:24.741129 | 
orchestrator | 2025-05-19 19:52:24.741140 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ****************************** 2025-05-19 19:52:24.741151 | orchestrator | Monday 19 May 2025 19:52:16 +0000 (0:00:01.690) 0:00:21.789 ************ 2025-05-19 19:52:24.741161 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:52:24.741172 | orchestrator | 2025-05-19 19:52:24.741184 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] *** 2025-05-19 19:52:24.741194 | orchestrator | Monday 19 May 2025 19:52:17 +0000 (0:00:00.466) 0:00:22.255 ************ 2025-05-19 19:52:24.741205 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:52:24.741216 | orchestrator | 2025-05-19 19:52:24.741228 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] *** 2025-05-19 19:52:24.741240 | orchestrator | Monday 19 May 2025 19:52:17 +0000 (0:00:00.630) 0:00:22.886 ************ 2025-05-19 19:52:24.741259 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring) 2025-05-19 19:52:24.741271 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring) 2025-05-19 19:52:24.741282 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring) 2025-05-19 19:52:24.741292 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring) 2025-05-19 19:52:24.741311 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring) 2025-05-19 19:52:24.741322 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring) 2025-05-19 19:52:24.741333 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring) 2025-05-19 19:52:24.741344 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring) 2025-05-19 19:52:24.741355 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring) 2025-05-19 19:52:24.741366 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring) 2025-05-19 19:52:24.741377 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring) 2025-05-19 19:52:24.741387 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring) 2025-05-19 19:52:24.741398 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring) 2025-05-19 19:52:24.741409 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring) 2025-05-19 19:52:24.741420 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring) 2025-05-19 19:52:24.741431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring) 2025-05-19 19:52:24.741441 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring) 2025-05-19 19:52:24.741452 | orchestrator | 2025-05-19 19:52:24.741463 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:52:24.741475 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-19 19:52:24.741487 | orchestrator | 2025-05-19 19:52:24.741498 | orchestrator | 2025-05-19 19:52:24.741509 | orchestrator | 2025-05-19 19:52:24.741520 | orchestrator | TASKS RECAP 
********************************************************************
2025-05-19 19:52:24.741531 | orchestrator | Monday 19 May 2025 19:52:23 +0000 (0:00:06.022) 0:00:28.909 ************
2025-05-19 19:52:24.741542 | orchestrator | ===============================================================================
2025-05-19 19:52:24.741558 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 6.02s
2025-05-19 19:52:24.741569 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.03s
2025-05-19 19:52:24.741580 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.69s
2025-05-19 19:52:24.741592 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.69s
2025-05-19 19:52:24.741602 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.29s
2025-05-19 19:52:24.741613 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.24s
2025-05-19 19:52:24.741624 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.92s
2025-05-19 19:52:24.741635 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.86s
2025-05-19 19:52:24.741646 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.82s
2025-05-19 19:52:24.741657 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s
2025-05-19 19:52:24.741667 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.63s
2025-05-19 19:52:24.741679 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.59s
2025-05-19 19:52:24.741690 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.52s
2025-05-19 19:52:24.741701 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.49s
2025-05-19 19:52:24.741712 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.47s
2025-05-19 19:52:24.741723 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.47s
2025-05-19 19:52:24.741734 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.47s
2025-05-19 19:52:24.741752 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.44s
2025-05-19 19:52:24.741763 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s
2025-05-19 19:52:24.741773 | orchestrator | ceph-facts : set_fact build dedicated_devices from resolved symlinks ---- 0.41s
2025-05-19 19:52:24.741785 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:52:24.741796 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:52:24.741807 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED
2025-05-19 19:52:24.741844 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED
2025-05-19 19:52:24.741870 | orchestrator | 2025-05-19 19:52:24 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED
2025-05-19 19:52:24.741882 |
orchestrator | 2025-05-19 19:52:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:27.800060 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:27.800308 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:27.801128 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:27.801858 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:27.802724 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state STARTED 2025-05-19 19:52:27.803941 | orchestrator | 2025-05-19 19:52:27 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:27.803982 | orchestrator | 2025-05-19 19:52:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:30.867994 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:30.870853 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:30.872555 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:30.873148 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:30.873894 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:30.874362 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task 4ff26df9-2827-4ea6-995c-378c8f6cdef7 is in state SUCCESS 2025-05-19 19:52:30.875592 | orchestrator | 2025-05-19 19:52:30 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:30.875624 | orchestrator | 2025-05-19 19:52:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:33.915993 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:33.916118 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:33.919654 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:33.920125 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:33.920300 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:33.921359 | orchestrator | 2025-05-19 19:52:33 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:33.921453 | orchestrator | 2025-05-19 19:52:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:36.964494 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:36.964591 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:36.966863 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:36.969658 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 
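Before the keys are copied on into the configuration repository in the next play below, the ceph-fetch-keys run above has already placed them under /share/11111111-1111-1111-1111-111111111111/. The following is a small illustrative check, assuming that fetch-directory layout, which verifies that the keyrings the "Check ceph keys" task iterates over are actually present; the script is a sketch for clarity, not part of the deployment:

    import sys
    from pathlib import Path

    # Fetch directory as shown in the ceph-fetch-keys task above.
    FETCH_DIR = Path("/share/11111111-1111-1111-1111-111111111111")

    # Keyrings the "Check ceph keys" task in the next play iterates over.
    EXPECTED = [
        "ceph.client.admin.keyring",
        "ceph.client.cinder.keyring",
        "ceph.client.cinder-backup.keyring",
        "ceph.client.nova.keyring",
        "ceph.client.glance.keyring",
        "ceph.client.gnocchi.keyring",
        "ceph.client.manila.keyring",
    ]

    missing = [name for name in EXPECTED if not (FETCH_DIR / name).is_file()]
    if missing:
        print("missing keyrings:", ", ".join(missing))
        sys.exit(1)
    print("all expected keyrings are present")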
2025-05-19 19:52:36.969732 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state STARTED 2025-05-19 19:52:36.973004 | orchestrator | 2025-05-19 19:52:36 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:36.973063 | orchestrator | 2025-05-19 19:52:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:40.005220 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:40.008632 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:40.008918 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:40.009247 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:40.012076 | orchestrator | 2025-05-19 19:52:40.012275 | orchestrator | 2025-05-19 19:52:40.012305 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-19 19:52:40.012327 | orchestrator | 2025-05-19 19:52:40.012345 | orchestrator | TASK [Check ceph keys] ********************************************************* 2025-05-19 19:52:40.012364 | orchestrator | Monday 19 May 2025 19:51:47 +0000 (0:00:00.132) 0:00:00.132 ************ 2025-05-19 19:52:40.012381 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-19 19:52:40.012398 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012416 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012433 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 19:52:40.012450 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012468 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-19 19:52:40.012485 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-19 19:52:40.012501 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-19 19:52:40.012519 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-19 19:52:40.012536 | orchestrator | 2025-05-19 19:52:40.012554 | orchestrator | TASK [Set _fetch_ceph_keys fact] *********************************************** 2025-05-19 19:52:40.012571 | orchestrator | Monday 19 May 2025 19:51:49 +0000 (0:00:02.639) 0:00:02.772 ************ 2025-05-19 19:52:40.012588 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-19 19:52:40.012606 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012624 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012641 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 19:52:40.012715 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-19 19:52:40.012734 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-19 19:52:40.012751 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.glance.keyring) 2025-05-19 19:52:40.012769 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-19 19:52:40.012813 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-19 19:52:40.012830 | orchestrator | 2025-05-19 19:52:40.012847 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] *** 2025-05-19 19:52:40.012877 | orchestrator | Monday 19 May 2025 19:51:50 +0000 (0:00:00.253) 0:00:03.025 ************ 2025-05-19 19:52:40.012896 | orchestrator | ok: [testbed-manager] => { 2025-05-19 19:52:40.012915 | orchestrator |  "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete." 2025-05-19 19:52:40.012935 | orchestrator | } 2025-05-19 19:52:40.012954 | orchestrator | 2025-05-19 19:52:40.012972 | orchestrator | TASK [Fetch ceph keys from the first monitor node] ***************************** 2025-05-19 19:52:40.012989 | orchestrator | Monday 19 May 2025 19:51:50 +0000 (0:00:00.146) 0:00:03.172 ************ 2025-05-19 19:52:40.013006 | orchestrator | changed: [testbed-manager] 2025-05-19 19:52:40.013024 | orchestrator | 2025-05-19 19:52:40.013041 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] *********** 2025-05-19 19:52:40.013058 | orchestrator | Monday 19 May 2025 19:52:24 +0000 (0:00:34.460) 0:00:37.633 ************ 2025-05-19 19:52:40.013076 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'}) 2025-05-19 19:52:40.013093 | orchestrator | 2025-05-19 19:52:40.013110 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ******************** 2025-05-19 19:52:40.013128 | orchestrator | Monday 19 May 2025 19:52:25 +0000 (0:00:00.534) 0:00:38.167 ************ 2025-05-19 19:52:40.013146 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'}) 2025-05-19 19:52:40.013163 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'}) 2025-05-19 19:52:40.013180 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'}) 2025-05-19 19:52:40.013198 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'}) 2025-05-19 19:52:40.013216 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'}) 2025-05-19 19:52:40.013254 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'}) 2025-05-19 19:52:40.013274 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': 
'/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'}) 2025-05-19 19:52:40.013293 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'}) 2025-05-19 19:52:40.013313 | orchestrator | 2025-05-19 19:52:40.013332 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] ******************* 2025-05-19 19:52:40.013365 | orchestrator | Monday 19 May 2025 19:52:28 +0000 (0:00:02.768) 0:00:40.936 ************ 2025-05-19 19:52:40.013383 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:52:40.013400 | orchestrator | 2025-05-19 19:52:40.013417 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:52:40.013434 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:52:40.013451 | orchestrator | 2025-05-19 19:52:40.013467 | orchestrator | Monday 19 May 2025 19:52:28 +0000 (0:00:00.020) 0:00:40.957 ************ 2025-05-19 19:52:40.013484 | orchestrator | =============================================================================== 2025-05-19 19:52:40.013501 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 34.46s 2025-05-19 19:52:40.013517 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.77s 2025-05-19 19:52:40.013534 | orchestrator | Check ceph keys --------------------------------------------------------- 2.64s 2025-05-19 19:52:40.013552 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.53s 2025-05-19 19:52:40.013571 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.25s 2025-05-19 19:52:40.013591 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.15s 2025-05-19 19:52:40.013608 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.02s 2025-05-19 19:52:40.013624 | orchestrator | 2025-05-19 19:52:40.013641 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:40.013658 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task 66ab76df-314c-4ac1-b7d8-d6ba7b12c2d0 is in state SUCCESS 2025-05-19 19:52:40.013675 | orchestrator | 2025-05-19 19:52:40 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:40.013700 | orchestrator | 2025-05-19 19:52:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:43.047990 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:43.051347 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:43.051646 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:43.053543 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:43.054608 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:43.054627 | orchestrator | 2025-05-19 19:52:43 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:43.054633 | orchestrator 
| 2025-05-19 19:52:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:46.082097 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:46.082189 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:46.082402 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:46.082981 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:46.083700 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:46.084059 | orchestrator | 2025-05-19 19:52:46 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:46.084095 | orchestrator | 2025-05-19 19:52:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:49.123470 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:49.123587 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:49.123604 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:49.124236 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:49.126081 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:49.126505 | orchestrator | 2025-05-19 19:52:49 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:49.126538 | orchestrator | 2025-05-19 19:52:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:52.154332 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:52.154440 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:52.154791 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:52.155140 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:52.155637 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:52.156149 | orchestrator | 2025-05-19 19:52:52 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:52.156172 | orchestrator | 2025-05-19 19:52:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:55.184545 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:55.184681 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:55.185014 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:55.185419 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:55.187044 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 
19:52:55.187117 | orchestrator | 2025-05-19 19:52:55 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:55.187144 | orchestrator | 2025-05-19 19:52:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:52:58.213681 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:52:58.213878 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:52:58.214195 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:52:58.214777 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:52:58.215221 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:52:58.216048 | orchestrator | 2025-05-19 19:52:58 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:52:58.216103 | orchestrator | 2025-05-19 19:52:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:01.246379 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:53:01.246523 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:01.246552 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:01.246574 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:01.246586 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:01.246597 | orchestrator | 2025-05-19 19:53:01 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:01.246608 | orchestrator | 2025-05-19 19:53:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:04.273585 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:53:04.273678 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:04.274001 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:04.274562 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:04.275035 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:04.275501 | orchestrator | 2025-05-19 19:53:04 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:04.275576 | orchestrator | 2025-05-19 19:53:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:07.306456 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:53:07.306760 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:07.306952 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:07.307606 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in 
state STARTED 2025-05-19 19:53:07.308460 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:07.309424 | orchestrator | 2025-05-19 19:53:07 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:07.309472 | orchestrator | 2025-05-19 19:53:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:10.338315 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:53:10.338427 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:10.338448 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:10.338943 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:10.340148 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:10.340181 | orchestrator | 2025-05-19 19:53:10 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:10.340209 | orchestrator | 2025-05-19 19:53:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:13.370376 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state STARTED 2025-05-19 19:53:13.370638 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:13.370686 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:13.371133 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:13.371621 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:13.372073 | orchestrator | 2025-05-19 19:53:13 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:13.372095 | orchestrator | 2025-05-19 19:53:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:16.395510 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task fd3bac35-bb90-448d-8941-cb535a2d84ed is in state SUCCESS 2025-05-19 19:53:16.395639 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:16.397524 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:16.397929 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:16.398412 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:16.399159 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:16.402187 | orchestrator | 2025-05-19 19:53:16 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:16.402271 | orchestrator | 2025-05-19 19:53:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:19.455466 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:19.455571 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task 
bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:19.455875 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:19.460504 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:19.461011 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:19.461839 | orchestrator | 2025-05-19 19:53:19 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:19.461864 | orchestrator | 2025-05-19 19:53:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:22.503375 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:22.503482 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:22.503641 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:22.504297 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:22.505333 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:22.505886 | orchestrator | 2025-05-19 19:53:22 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:22.505975 | orchestrator | 2025-05-19 19:53:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:25.543388 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:25.543505 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:25.543534 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:25.544219 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:25.545057 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:25.546051 | orchestrator | 2025-05-19 19:53:25 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:25.546120 | orchestrator | 2025-05-19 19:53:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:28.586113 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:28.586305 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:28.588257 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:28.588756 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:28.589638 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:28.590136 | orchestrator | 2025-05-19 19:53:28 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:28.590349 | orchestrator | 2025-05-19 19:53:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:31.635062 | orchestrator | 2025-05-19 
19:53:31 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:31.637017 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:31.638987 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:31.640153 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:31.640853 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:31.641819 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task 387ee307-0233-4af6-9c3d-a37637e42eb9 is in state STARTED 2025-05-19 19:53:31.642844 | orchestrator | 2025-05-19 19:53:31 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:31.642875 | orchestrator | 2025-05-19 19:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:34.670605 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:34.671116 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:34.672282 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:34.675483 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:34.679375 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:34.680074 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task 387ee307-0233-4af6-9c3d-a37637e42eb9 is in state STARTED 2025-05-19 19:53:34.682384 | orchestrator | 2025-05-19 19:53:34 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:34.682448 | orchestrator | 2025-05-19 19:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:37.714315 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:37.715621 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:37.716667 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:37.717967 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:37.718507 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:37.719209 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task 387ee307-0233-4af6-9c3d-a37637e42eb9 is in state STARTED 2025-05-19 19:53:37.721200 | orchestrator | 2025-05-19 19:53:37 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:37.721280 | orchestrator | 2025-05-19 19:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:40.758700 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:40.759443 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:40.761709 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 
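For context on the "Copy ceph keys to the configuration repository" play further above: it fetches the keyrings from the first Ceph monitor onto the manager and then copies each keyring to a fixed destination inside the configuration repository. A minimal sketch of that copy pattern; the destination paths are taken from the log, while the staging directory and file mode are assumptions:

- name: Copy ceph kolla keys to the configuration repository   # sketch, not the original task
  ansible.builtin.copy:
    src: "/tmp/ceph-keys/{{ item.src }}"   # assumed staging directory on the manager
    dest: "{{ item.dest }}"
    mode: "0640"                           # assumed permissions
  loop:
    - src: ceph.client.cinder.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring
    - src: ceph.client.nova.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring
    - src: ceph.client.glance.keyring
      dest: /opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring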
2025-05-19 19:53:40.762871 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:40.763336 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:40.764259 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task 387ee307-0233-4af6-9c3d-a37637e42eb9 is in state SUCCESS 2025-05-19 19:53:40.765279 | orchestrator | 2025-05-19 19:53:40 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:40.765326 | orchestrator | 2025-05-19 19:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:43.800436 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:43.801853 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:43.803474 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:43.805495 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:43.807095 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:43.808987 | orchestrator | 2025-05-19 19:53:43 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:43.809039 | orchestrator | 2025-05-19 19:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:46.857521 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:46.857766 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:46.857829 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:46.858078 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:46.858889 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:46.859866 | orchestrator | 2025-05-19 19:53:46 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:46.859903 | orchestrator | 2025-05-19 19:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:49.920994 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:49.921109 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:49.921782 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:49.922567 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state STARTED 2025-05-19 19:53:49.924142 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:49.924634 | orchestrator | 2025-05-19 19:53:49 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:49.924834 | orchestrator | 2025-05-19 19:53:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:52.968668 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task 
e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:52.968752 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:52.968930 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:52.969471 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task 7d9794da-30bd-4d2f-8b60-540d7164f248 is in state SUCCESS 2025-05-19 19:53:52.970183 | orchestrator | 2025-05-19 19:53:52.970199 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-05-19 19:53:52.970205 | orchestrator | 2025-05-19 19:53:52.970211 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-05-19 19:53:52.970217 | orchestrator | Monday 19 May 2025 19:51:45 +0000 (0:00:00.155) 0:00:00.155 ************ 2025-05-19 19:53:52.970222 | orchestrator | changed: [localhost] 2025-05-19 19:53:52.970230 | orchestrator | 2025-05-19 19:53:52.970236 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-05-19 19:53:52.970241 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.841) 0:00:00.996 ************ 2025-05-19 19:53:52.970246 | orchestrator | changed: [localhost] 2025-05-19 19:53:52.970252 | orchestrator | 2025-05-19 19:53:52.970271 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-05-19 19:53:52.970277 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:45.867) 0:00:46.864 ************ 2025-05-19 19:53:52.970282 | orchestrator | changed: [localhost] 2025-05-19 19:53:52.970288 | orchestrator | 2025-05-19 19:53:52.970293 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:53:52.970299 | orchestrator | 2025-05-19 19:53:52.970304 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:53:52.970309 | orchestrator | Monday 19 May 2025 19:52:36 +0000 (0:00:03.844) 0:00:50.708 ************ 2025-05-19 19:53:52.970315 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:53:52.970320 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:53:52.970325 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:53:52.970331 | orchestrator | 2025-05-19 19:53:52.970336 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:53:52.970360 | orchestrator | Monday 19 May 2025 19:52:36 +0000 (0:00:00.417) 0:00:51.126 ************ 2025-05-19 19:53:52.970365 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-05-19 19:53:52.970370 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-05-19 19:53:52.970376 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-05-19 19:53:52.970381 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-05-19 19:53:52.970386 | orchestrator | 2025-05-19 19:53:52.970391 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-05-19 19:53:52.970396 | orchestrator | skipping: no hosts matched 2025-05-19 19:53:52.970401 | orchestrator | 2025-05-19 19:53:52.970406 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:53:52.970412 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-05-19 19:53:52.970420 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:52.970428 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:52.970433 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:52.970438 | orchestrator | 2025-05-19 19:53:52.970443 | orchestrator | 2025-05-19 19:53:52.970448 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:53:52.970453 | orchestrator | Monday 19 May 2025 19:52:37 +0000 (0:00:00.457) 0:00:51.583 ************ 2025-05-19 19:53:52.970458 | orchestrator | =============================================================================== 2025-05-19 19:53:52.970463 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 45.87s 2025-05-19 19:53:52.970469 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.84s 2025-05-19 19:53:52.970474 | orchestrator | Ensure the destination directory exists --------------------------------- 0.84s 2025-05-19 19:53:52.970479 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2025-05-19 19:53:52.970484 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-05-19 19:53:52.970489 | orchestrator | 2025-05-19 19:53:52.970494 | orchestrator | 2025-05-19 19:53:52.970499 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-19 19:53:52.970504 | orchestrator | 2025-05-19 19:53:52.970509 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-19 19:53:52.970514 | orchestrator | Monday 19 May 2025 19:52:30 +0000 (0:00:00.128) 0:00:00.128 ************ 2025-05-19 19:53:52.970519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-19 19:53:52.970524 | orchestrator | 2025-05-19 19:53:52.970529 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-19 19:53:52.970534 | orchestrator | Monday 19 May 2025 19:52:31 +0000 (0:00:00.162) 0:00:00.291 ************ 2025-05-19 19:53:52.970539 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-19 19:53:52.970545 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-19 19:53:52.970550 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-19 19:53:52.970555 | orchestrator | 2025-05-19 19:53:52.970560 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-19 19:53:52.970566 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:01.009) 0:00:01.301 ************ 2025-05-19 19:53:52.970571 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-19 19:53:52.970576 | orchestrator | 2025-05-19 19:53:52.970587 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-19 19:53:52.970592 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:00.864) 0:00:02.165 ************ 2025-05-19 19:53:52.970635 | orchestrator | changed: [testbed-manager] 
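For context on the "Download ironic ipa images" play above: it creates a destination directory on the deploy host and downloads the ironic-python-agent initramfs and kernel images. A minimal sketch of that pattern; the URLs and the destination directory are placeholders, not the values used by this job:

- name: Ensure the destination directory exists        # sketch only
  ansible.builtin.file:
    path: /opt/ironic-images                           # assumed destination
    state: directory
    mode: "0755"

- name: Download ironic-agent initramfs                # placeholder URL
  ansible.builtin.get_url:
    url: https://example.org/ipa/ironic-python-agent.initramfs
    dest: /opt/ironic-images/ironic-agent.initramfs

- name: Download ironic-agent kernel                   # placeholder URL
  ansible.builtin.get_url:
    url: https://example.org/ipa/ironic-python-agent.kernel
    dest: /opt/ironic-images/ironic-agent.kernel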
2025-05-19 19:53:52.970641 | orchestrator | 2025-05-19 19:53:52.970646 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-19 19:53:52.970651 | orchestrator | Monday 19 May 2025 19:52:33 +0000 (0:00:00.680) 0:00:02.846 ************ 2025-05-19 19:53:52.970656 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:52.970661 | orchestrator | 2025-05-19 19:53:52.970666 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-19 19:53:52.970671 | orchestrator | Monday 19 May 2025 19:52:34 +0000 (0:00:00.748) 0:00:03.594 ************ 2025-05-19 19:53:52.970676 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-19 19:53:52.970681 | orchestrator | ok: [testbed-manager] 2025-05-19 19:53:52.970686 | orchestrator | 2025-05-19 19:53:52.970695 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-19 19:53:52.970700 | orchestrator | Monday 19 May 2025 19:53:07 +0000 (0:00:33.270) 0:00:36.864 ************ 2025-05-19 19:53:52.970705 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-19 19:53:52.970710 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-19 19:53:52.970715 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-19 19:53:52.970720 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-19 19:53:52.970725 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-19 19:53:52.970730 | orchestrator | 2025-05-19 19:53:52.970735 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-19 19:53:52.970740 | orchestrator | Monday 19 May 2025 19:53:10 +0000 (0:00:03.218) 0:00:40.083 ************ 2025-05-19 19:53:52.970745 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-19 19:53:52.970750 | orchestrator | 2025-05-19 19:53:52.970755 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-19 19:53:52.970760 | orchestrator | Monday 19 May 2025 19:53:11 +0000 (0:00:00.347) 0:00:40.430 ************ 2025-05-19 19:53:52.970765 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:53:52.970770 | orchestrator | 2025-05-19 19:53:52.970775 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-19 19:53:52.970780 | orchestrator | Monday 19 May 2025 19:53:11 +0000 (0:00:00.087) 0:00:40.518 ************ 2025-05-19 19:53:52.970785 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:53:52.970790 | orchestrator | 2025-05-19 19:53:52.970795 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-19 19:53:52.970800 | orchestrator | Monday 19 May 2025 19:53:11 +0000 (0:00:00.227) 0:00:40.745 ************ 2025-05-19 19:53:52.970805 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:52.970810 | orchestrator | 2025-05-19 19:53:52.970816 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-19 19:53:52.970822 | orchestrator | Monday 19 May 2025 19:53:12 +0000 (0:00:01.118) 0:00:41.864 ************ 2025-05-19 19:53:52.970828 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:52.970833 | orchestrator | 2025-05-19 19:53:52.970839 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-19 
19:53:52.970844 | orchestrator | Monday 19 May 2025 19:53:13 +0000 (0:00:00.912) 0:00:42.776 ************ 2025-05-19 19:53:52.970850 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:52.970856 | orchestrator | 2025-05-19 19:53:52.970862 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-19 19:53:52.970868 | orchestrator | Monday 19 May 2025 19:53:13 +0000 (0:00:00.439) 0:00:43.216 ************ 2025-05-19 19:53:52.970874 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-19 19:53:52.970880 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-19 19:53:52.970886 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-19 19:53:52.970897 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-19 19:53:52.970903 | orchestrator | 2025-05-19 19:53:52.970909 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:53:52.970914 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 19:53:52.970919 | orchestrator | 2025-05-19 19:53:52.970924 | orchestrator | Monday 19 May 2025 19:53:15 +0000 (0:00:01.133) 0:00:44.349 ************ 2025-05-19 19:53:52.970929 | orchestrator | =============================================================================== 2025-05-19 19:53:52.970934 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 33.27s 2025-05-19 19:53:52.970939 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.22s 2025-05-19 19:53:52.970944 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.13s 2025-05-19 19:53:52.970949 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.12s 2025-05-19 19:53:52.970954 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.01s 2025-05-19 19:53:52.970959 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.91s 2025-05-19 19:53:52.970964 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 0.86s 2025-05-19 19:53:52.970969 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.75s 2025-05-19 19:53:52.970974 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.68s 2025-05-19 19:53:52.970979 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.44s 2025-05-19 19:53:52.970984 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.35s 2025-05-19 19:53:52.970989 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.23s 2025-05-19 19:53:52.970994 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.16s 2025-05-19 19:53:52.970999 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s 2025-05-19 19:53:52.971004 | orchestrator | 2025-05-19 19:53:52.971009 | orchestrator | None 2025-05-19 19:53:52.971234 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:52.972049 | orchestrator | 2025-05-19 19:53:52 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:52.972139 | orchestrator | 2025-05-19 19:53:52 | 
INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:56.012393 | orchestrator | 2025-05-19 19:53:56 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:56.012578 | orchestrator | 2025-05-19 19:53:56 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:56.013367 | orchestrator | 2025-05-19 19:53:56 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:56.013818 | orchestrator | 2025-05-19 19:53:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:56.014493 | orchestrator | 2025-05-19 19:53:56 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state STARTED 2025-05-19 19:53:56.014569 | orchestrator | 2025-05-19 19:53:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:59.048773 | orchestrator | 2025-05-19 19:53:59 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:53:59.048866 | orchestrator | 2025-05-19 19:53:59 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:53:59.049462 | orchestrator | 2025-05-19 19:53:59 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED 2025-05-19 19:53:59.050522 | orchestrator | 2025-05-19 19:53:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:53:59.051041 | orchestrator | 2025-05-19 19:53:59 | INFO  | Task 1bf26f6e-5ecf-4a2c-b773-8e8802850fae is in state SUCCESS 2025-05-19 19:53:59.051171 | orchestrator | 2025-05-19 19:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:53:59.052490 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-19 19:53:59.052519 | orchestrator | 2025-05-19 19:53:59.052530 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-19 19:53:59.052540 | orchestrator | 2025-05-19 19:53:59.052546 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-19 19:53:59.052553 | orchestrator | Monday 19 May 2025 19:53:18 +0000 (0:00:00.340) 0:00:00.340 ************ 2025-05-19 19:53:59.052559 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052565 | orchestrator | 2025-05-19 19:53:59.052570 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-19 19:53:59.052576 | orchestrator | Monday 19 May 2025 19:53:20 +0000 (0:00:02.008) 0:00:02.348 ************ 2025-05-19 19:53:59.052581 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052615 | orchestrator | 2025-05-19 19:53:59.052621 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-05-19 19:53:59.052626 | orchestrator | Monday 19 May 2025 19:53:20 +0000 (0:00:00.882) 0:00:03.231 ************ 2025-05-19 19:53:59.052631 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052636 | orchestrator | 2025-05-19 19:53:59.052642 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-19 19:53:59.052647 | orchestrator | Monday 19 May 2025 19:53:21 +0000 (0:00:00.883) 0:00:04.115 ************ 2025-05-19 19:53:59.052652 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052657 | orchestrator | 2025-05-19 19:53:59.052662 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-19 19:53:59.052668 | 
orchestrator | Monday 19 May 2025 19:53:22 +0000 (0:00:01.044) 0:00:05.160 ************ 2025-05-19 19:53:59.052673 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052678 | orchestrator | 2025-05-19 19:53:59.052683 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-19 19:53:59.052689 | orchestrator | Monday 19 May 2025 19:53:23 +0000 (0:00:00.937) 0:00:06.097 ************ 2025-05-19 19:53:59.052694 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052699 | orchestrator | 2025-05-19 19:53:59.052704 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-19 19:53:59.052709 | orchestrator | Monday 19 May 2025 19:53:24 +0000 (0:00:00.853) 0:00:06.951 ************ 2025-05-19 19:53:59.052714 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052719 | orchestrator | 2025-05-19 19:53:59.052724 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-19 19:53:59.052729 | orchestrator | Monday 19 May 2025 19:53:26 +0000 (0:00:02.164) 0:00:09.116 ************ 2025-05-19 19:53:59.052734 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052739 | orchestrator | 2025-05-19 19:53:59.052744 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-19 19:53:59.052749 | orchestrator | Monday 19 May 2025 19:53:27 +0000 (0:00:01.058) 0:00:10.175 ************ 2025-05-19 19:53:59.052754 | orchestrator | changed: [testbed-manager] 2025-05-19 19:53:59.052759 | orchestrator | 2025-05-19 19:53:59.052764 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-19 19:53:59.052769 | orchestrator | Monday 19 May 2025 19:53:44 +0000 (0:00:16.907) 0:00:27.082 ************ 2025-05-19 19:53:59.052774 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:53:59.052779 | orchestrator | 2025-05-19 19:53:59.052784 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 19:53:59.052789 | orchestrator | 2025-05-19 19:53:59.052794 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 19:53:59.052799 | orchestrator | Monday 19 May 2025 19:53:45 +0000 (0:00:00.831) 0:00:27.913 ************ 2025-05-19 19:53:59.053058 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.053068 | orchestrator | 2025-05-19 19:53:59.053073 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 19:53:59.053078 | orchestrator | 2025-05-19 19:53:59.053083 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 19:53:59.053088 | orchestrator | Monday 19 May 2025 19:53:47 +0000 (0:00:02.349) 0:00:30.263 ************ 2025-05-19 19:53:59.053093 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:53:59.053098 | orchestrator | 2025-05-19 19:53:59.053103 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 19:53:59.053108 | orchestrator | 2025-05-19 19:53:59.053113 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 19:53:59.053130 | orchestrator | Monday 19 May 2025 19:53:49 +0000 (0:00:01.684) 0:00:31.948 ************ 2025-05-19 19:53:59.053135 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:53:59.053141 | 
orchestrator | 2025-05-19 19:53:59.053146 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:53:59.053153 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 19:53:59.053160 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:59.053165 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:59.053170 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:53:59.053175 | orchestrator | 2025-05-19 19:53:59.053180 | orchestrator | 2025-05-19 19:53:59.053185 | orchestrator | 2025-05-19 19:53:59.053190 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:53:59.053196 | orchestrator | Monday 19 May 2025 19:53:51 +0000 (0:00:01.793) 0:00:33.742 ************ 2025-05-19 19:53:59.053201 | orchestrator | =============================================================================== 2025-05-19 19:53:59.053206 | orchestrator | Create admin user ------------------------------------------------------ 16.91s 2025-05-19 19:53:59.053230 | orchestrator | Restart ceph manager service -------------------------------------------- 5.83s 2025-05-19 19:53:59.053236 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.16s 2025-05-19 19:53:59.053241 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.01s 2025-05-19 19:53:59.053246 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s 2025-05-19 19:53:59.053251 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s 2025-05-19 19:53:59.053256 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.94s 2025-05-19 19:53:59.053261 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.88s 2025-05-19 19:53:59.053266 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.88s 2025-05-19 19:53:59.053271 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.85s 2025-05-19 19:53:59.053276 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.83s 2025-05-19 19:53:59.053281 | orchestrator | 2025-05-19 19:53:59.053287 | orchestrator | 2025-05-19 19:53:59.053291 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:53:59.053296 | orchestrator | 2025-05-19 19:53:59.053301 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:53:59.053306 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.309) 0:00:00.309 ************ 2025-05-19 19:53:59.053312 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:53:59.053318 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:53:59.053323 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:53:59.053333 | orchestrator | 2025-05-19 19:53:59.053338 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:53:59.053343 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.397) 0:00:00.706 ************ 2025-05-19 19:53:59.053349 | orchestrator | 
ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-19 19:53:59.053355 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-19 19:53:59.053360 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-19 19:53:59.053365 | orchestrator | 2025-05-19 19:53:59.053370 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-19 19:53:59.053375 | orchestrator | 2025-05-19 19:53:59.053380 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-19 19:53:59.053385 | orchestrator | Monday 19 May 2025 19:51:47 +0000 (0:00:00.438) 0:00:01.145 ************ 2025-05-19 19:53:59.053391 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:53:59.053397 | orchestrator | 2025-05-19 19:53:59.053402 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-19 19:53:59.053407 | orchestrator | Monday 19 May 2025 19:51:47 +0000 (0:00:00.728) 0:00:01.873 ************ 2025-05-19 19:53:59.053412 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-19 19:53:59.053417 | orchestrator | 2025-05-19 19:53:59.053423 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-19 19:53:59.053428 | orchestrator | Monday 19 May 2025 19:51:51 +0000 (0:00:03.615) 0:00:05.488 ************ 2025-05-19 19:53:59.053433 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-19 19:53:59.053438 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-19 19:53:59.053443 | orchestrator | 2025-05-19 19:53:59.053452 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-19 19:53:59.053460 | orchestrator | Monday 19 May 2025 19:51:58 +0000 (0:00:06.680) 0:00:12.168 ************ 2025-05-19 19:53:59.053468 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:53:59.053476 | orchestrator | 2025-05-19 19:53:59.053484 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-19 19:53:59.053491 | orchestrator | Monday 19 May 2025 19:52:01 +0000 (0:00:03.722) 0:00:15.891 ************ 2025-05-19 19:53:59.053500 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:53:59.053508 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-19 19:53:59.053515 | orchestrator | 2025-05-19 19:53:59.053529 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-19 19:53:59.053537 | orchestrator | Monday 19 May 2025 19:52:05 +0000 (0:00:04.081) 0:00:19.972 ************ 2025-05-19 19:53:59.053546 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:53:59.053553 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-19 19:53:59.053558 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-19 19:53:59.053563 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-19 19:53:59.053568 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-19 19:53:59.053573 | orchestrator | 2025-05-19 19:53:59.053578 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] 
******************** 2025-05-19 19:53:59.053583 | orchestrator | Monday 19 May 2025 19:52:20 +0000 (0:00:14.906) 0:00:34.879 ************ 2025-05-19 19:53:59.053604 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-19 19:53:59.053609 | orchestrator | 2025-05-19 19:53:59.053614 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-19 19:53:59.053619 | orchestrator | Monday 19 May 2025 19:52:25 +0000 (0:00:05.162) 0:00:40.041 ************ 2025-05-19 19:53:59.053635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-05-19 19:53:59.053711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053717 | orchestrator | 2025-05-19 19:53:59.053723 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-19 19:53:59.053729 | orchestrator | Monday 19 May 2025 19:52:28 +0000 (0:00:02.907) 0:00:42.949 ************ 2025-05-19 19:53:59.053734 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-19 19:53:59.053740 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-19 19:53:59.053746 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-19 19:53:59.053751 | orchestrator | 2025-05-19 19:53:59.053760 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-19 19:53:59.053766 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:03.256) 0:00:46.205 ************ 2025-05-19 19:53:59.053772 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.053778 | orchestrator | 2025-05-19 19:53:59.053783 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-19 19:53:59.053789 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:00.101) 0:00:46.306 ************ 2025-05-19 19:53:59.053798 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.053804 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.053810 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.053816 | orchestrator | 2025-05-19 19:53:59.053821 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-19 19:53:59.053827 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:00.401) 0:00:46.708 ************ 2025-05-19 19:53:59.053833 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:53:59.053839 | orchestrator | 2025-05-19 19:53:59.053845 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-19 19:53:59.053851 | orchestrator | Monday 19 May 2025 19:52:33 +0000 (0:00:01.189) 0:00:47.897 ************ 2025-05-19 19:53:59.053863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.053886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.053933 | orchestrator | 2025-05-19 19:53:59.053939 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-19 19:53:59.053950 | orchestrator | Monday 19 May 2025 19:52:38 +0000 (0:00:04.569) 0:00:52.467 ************ 2025-05-19 19:53:59.053959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.053970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.053977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.053983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.053990 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.053996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054054 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.054067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054084 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.054089 | orchestrator | 2025-05-19 19:53:59.054094 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-19 19:53:59.054099 | orchestrator | Monday 19 May 2025 19:52:39 +0000 (0:00:01.398) 0:00:53.865 ************ 2025-05-19 19:53:59.054105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054131 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.054137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.054164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054185 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.054190 | orchestrator | 2025-05-19 19:53:59.054195 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-19 19:53:59.054200 | orchestrator | Monday 19 May 2025 19:52:43 +0000 (0:00:03.955) 0:00:57.821 ************ 2025-05-19 19:53:59.054205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054273 | orchestrator | 2025-05-19 19:53:59.054278 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-19 19:53:59.054283 | orchestrator | Monday 19 May 2025 19:52:48 +0000 (0:00:05.100) 0:01:02.922 ************ 2025-05-19 19:53:59.054288 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:53:59.054293 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.054298 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:53:59.054303 | orchestrator | 2025-05-19 19:53:59.054308 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-19 19:53:59.054313 | orchestrator | Monday 19 May 2025 19:52:51 +0000 (0:00:02.470) 0:01:05.393 ************ 2025-05-19 19:53:59.054321 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:53:59.054326 | orchestrator | 2025-05-19 19:53:59.054331 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-19 19:53:59.054336 | orchestrator | Monday 19 May 2025 19:52:53 +0000 (0:00:02.174) 0:01:07.568 ************ 2025-05-19 19:53:59.054341 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.054346 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.054351 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.054356 | orchestrator | 2025-05-19 19:53:59.054361 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-19 19:53:59.054366 | orchestrator | Monday 19 May 2025 19:52:54 +0000 (0:00:00.857) 0:01:08.425 ************ 2025-05-19 19:53:59.054371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054438 | orchestrator | 2025-05-19 19:53:59.054443 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-19 19:53:59.054451 | orchestrator | Monday 19 May 2025 19:53:05 +0000 (0:00:11.357) 0:01:19.782 ************ 2025-05-19 19:53:59.054466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054498 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.054507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054540 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.054554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 19:53:59.054566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:53:59.054577 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.054582 | orchestrator | 2025-05-19 19:53:59.054612 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-19 19:53:59.054617 | orchestrator | Monday 19 May 2025 19:53:06 +0000 (0:00:00.827) 0:01:20.610 ************ 2025-05-19 19:53:59.054626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 19:53:59.054651 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:53:59.054693 | orchestrator | 2025-05-19 19:53:59.054698 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-19 19:53:59.054704 | orchestrator | Monday 19 May 2025 19:53:10 +0000 (0:00:03.861) 0:01:24.472 ************ 2025-05-19 19:53:59.054709 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:53:59.054714 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:53:59.054719 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:53:59.054724 | orchestrator | 2025-05-19 19:53:59.054729 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-19 19:53:59.054734 | orchestrator | Monday 19 May 2025 19:53:11 +0000 (0:00:00.889) 0:01:25.361 ************ 2025-05-19 19:53:59.054739 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.054744 | orchestrator | 2025-05-19 19:53:59.054748 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-19 19:53:59.054753 | orchestrator | Monday 19 May 2025 19:53:14 +0000 (0:00:03.192) 0:01:28.554 ************ 2025-05-19 19:53:59.054758 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.054764 | orchestrator | 2025-05-19 19:53:59.054769 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-19 19:53:59.054774 | orchestrator | Monday 19 May 2025 19:53:17 +0000 (0:00:02.712) 0:01:31.267 ************ 2025-05-19 19:53:59.054779 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.054784 | orchestrator | 2025-05-19 19:53:59.054789 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 19:53:59.054794 | orchestrator | Monday 19 May 2025 19:53:29 +0000 (0:00:12.574) 0:01:43.841 ************ 2025-05-19 19:53:59.054799 | orchestrator | 2025-05-19 19:53:59.054804 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 19:53:59.054809 | orchestrator | Monday 19 May 2025 19:53:30 +0000 (0:00:00.294) 0:01:44.136 ************ 2025-05-19 19:53:59.054814 | orchestrator | 2025-05-19 19:53:59.054819 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 19:53:59.054824 | orchestrator | Monday 19 May 2025 19:53:31 +0000 (0:00:01.019) 0:01:45.156 ************ 2025-05-19 19:53:59.054829 | orchestrator | 2025-05-19 19:53:59.054834 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-19 19:53:59.054839 | orchestrator | Monday 19 May 2025 19:53:31 +0000 (0:00:00.118) 0:01:45.274 ************ 2025-05-19 19:53:59.054844 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:53:59.054849 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:53:59.054854 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:53:59.054859 | orchestrator | 2025-05-19 19:53:59.054867 | 
orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-19 19:53:59.054875 | orchestrator | Monday 19 May 2025 19:53:39 +0000 (0:00:07.795) 0:01:53.069 ************
2025-05-19 19:53:59.054880 | orchestrator | changed: [testbed-node-0]
2025-05-19 19:53:59.054885 | orchestrator | changed: [testbed-node-1]
2025-05-19 19:53:59.054890 | orchestrator | changed: [testbed-node-2]
2025-05-19 19:53:59.054895 | orchestrator |
2025-05-19 19:53:59.054900 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-19 19:53:59.054905 | orchestrator | Monday 19 May 2025 19:53:50 +0000 (0:00:11.688) 0:02:04.758 ************
2025-05-19 19:53:59.054910 | orchestrator | changed: [testbed-node-0]
2025-05-19 19:53:59.054915 | orchestrator | changed: [testbed-node-1]
2025-05-19 19:53:59.054920 | orchestrator | changed: [testbed-node-2]
2025-05-19 19:53:59.054925 | orchestrator |
2025-05-19 19:53:59.054930 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:53:59.054936 | orchestrator | testbed-node-0 : ok=24 changed=18 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0
2025-05-19 19:53:59.054941 | orchestrator | testbed-node-1 : ok=14 changed=10 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-05-19 19:53:59.054946 | orchestrator | testbed-node-2 : ok=14 changed=10 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-05-19 19:53:59.054951 | orchestrator |
2025-05-19 19:53:59.054956 | orchestrator |
2025-05-19 19:53:59.054961 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 19:53:59.054969 | orchestrator | Monday 19 May 2025 19:53:58 +0000 (0:00:07.331) 0:02:12.090 ************
2025-05-19 19:53:59.054974 | orchestrator | ===============================================================================
2025-05-19 19:53:59.054979 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.91s
2025-05-19 19:53:59.054985 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.57s
2025-05-19 19:53:59.054989 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.69s
2025-05-19 19:53:59.054994 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.36s
2025-05-19 19:53:59.054999 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.80s
2025-05-19 19:53:59.055004 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.33s
2025-05-19 19:53:59.055009 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.68s
2025-05-19 19:53:59.055014 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.16s
2025-05-19 19:53:59.055019 | orchestrator | barbican : Copying over config.json files for services ------------------ 5.10s
2025-05-19 19:53:59.055024 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.57s
2025-05-19 19:53:59.055029 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.08s
2025-05-19 19:53:59.055034 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 3.96s
2025-05-19 19:53:59.055039 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.86s
2025-05-19 19:53:59.055044 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.72s
2025-05-19 19:53:59.055049 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.62s
2025-05-19 19:53:59.055054 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 3.26s
2025-05-19 19:53:59.055059 | orchestrator | barbican : Creating barbican database ----------------------------------- 3.19s
2025-05-19 19:53:59.055064 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.91s
2025-05-19 19:53:59.055069 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.71s
2025-05-19 19:53:59.055074 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.47s
2025-05-19 19:54:02.091185 | orchestrator | 2025-05-19 19:54:02 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED
2025-05-19 19:54:02.091351 | orchestrator | 2025-05-19 19:54:02 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:54:02.092006 | orchestrator | 2025-05-19 19:54:02 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED
2025-05-19 19:54:02.092809 | orchestrator | 2025-05-19 19:54:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:54:02.093223 | orchestrator | 2025-05-19 19:54:02 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED
2025-05-19 19:54:02.093352 | orchestrator | 2025-05-19 19:54:02 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:54:05.130281 | orchestrator | 2025-05-19 19:54:05 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED
2025-05-19 19:54:05.130558 | orchestrator | 2025-05-19 19:54:05 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:54:05.130667 | orchestrator | 2025-05-19 19:54:05 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED
2025-05-19 19:54:05.131111 | orchestrator | 2025-05-19 19:54:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:54:05.131921 | orchestrator | 2025-05-19 19:54:05 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED
2025-05-19 19:54:05.131946 | orchestrator | 2025-05-19 19:54:05 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:54:08.164129 | orchestrator | 2025-05-19 19:54:08 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED
2025-05-19 19:54:08.164230 | orchestrator | 2025-05-19 19:54:08 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:54:08.164623 | orchestrator | 2025-05-19 19:54:08 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state STARTED
2025-05-19 19:54:08.165189 | orchestrator | 2025-05-19 19:54:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:54:08.165688 | orchestrator | 2025-05-19 19:54:08 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED
2025-05-19 19:54:08.165745 | orchestrator | 2025-05-19 19:54:08 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:54:11.193117 | orchestrator | 2025-05-19 19:54:11 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED
2025-05-19 19:54:11.193241 | orchestrator | 2025-05-19 19:54:11 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED
2025-05-19 19:54:11.193744 |
orchestrator | 2025-05-19 19:54:11 | INFO  | Task 952943b5-586e-48a0-a182-892b3390a86f is in state SUCCESS 2025-05-19 19:54:11.193777 | orchestrator | 2025-05-19 19:54:11.195325 | orchestrator | 2025-05-19 19:54:11.195371 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:54:11.195385 | orchestrator | 2025-05-19 19:54:11.195397 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:54:11.195409 | orchestrator | Monday 19 May 2025 19:52:44 +0000 (0:00:00.740) 0:00:00.740 ************ 2025-05-19 19:54:11.195420 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:54:11.195433 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:54:11.195444 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:54:11.195454 | orchestrator | 2025-05-19 19:54:11.195465 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:54:11.195476 | orchestrator | Monday 19 May 2025 19:52:45 +0000 (0:00:00.950) 0:00:01.690 ************ 2025-05-19 19:54:11.195487 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-19 19:54:11.195498 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-19 19:54:11.195509 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-19 19:54:11.195652 | orchestrator | 2025-05-19 19:54:11.195667 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-19 19:54:11.195678 | orchestrator | 2025-05-19 19:54:11.195689 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 19:54:11.195701 | orchestrator | Monday 19 May 2025 19:52:46 +0000 (0:00:00.440) 0:00:02.131 ************ 2025-05-19 19:54:11.195713 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:54:11.195815 | orchestrator | 2025-05-19 19:54:11.195830 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-05-19 19:54:11.195841 | orchestrator | Monday 19 May 2025 19:52:47 +0000 (0:00:01.231) 0:00:03.362 ************ 2025-05-19 19:54:11.195853 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-19 19:54:11.195864 | orchestrator | 2025-05-19 19:54:11.195875 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-19 19:54:11.195886 | orchestrator | Monday 19 May 2025 19:52:50 +0000 (0:00:03.114) 0:00:06.477 ************ 2025-05-19 19:54:11.195897 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-19 19:54:11.195909 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-19 19:54:11.195921 | orchestrator | 2025-05-19 19:54:11.195932 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-19 19:54:11.195943 | orchestrator | Monday 19 May 2025 19:52:57 +0000 (0:00:06.806) 0:00:13.283 ************ 2025-05-19 19:54:11.195954 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:54:11.195965 | orchestrator | 2025-05-19 19:54:11.195976 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-19 19:54:11.195987 | orchestrator | Monday 19 May 2025 19:53:01 +0000 
(0:00:03.838) 0:00:17.122 ************ 2025-05-19 19:54:11.195999 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:54:11.196011 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-19 19:54:11.196021 | orchestrator | 2025-05-19 19:54:11.196032 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-19 19:54:11.196044 | orchestrator | Monday 19 May 2025 19:53:05 +0000 (0:00:03.989) 0:00:21.112 ************ 2025-05-19 19:54:11.196055 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:54:11.196066 | orchestrator | 2025-05-19 19:54:11.196078 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-19 19:54:11.196089 | orchestrator | Monday 19 May 2025 19:53:08 +0000 (0:00:03.319) 0:00:24.431 ************ 2025-05-19 19:54:11.196100 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-19 19:54:11.196110 | orchestrator | 2025-05-19 19:54:11.196122 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 19:54:11.196132 | orchestrator | Monday 19 May 2025 19:53:13 +0000 (0:00:05.092) 0:00:29.523 ************ 2025-05-19 19:54:11.196142 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.196174 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:11.196186 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:11.196198 | orchestrator | 2025-05-19 19:54:11.196210 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-19 19:54:11.196222 | orchestrator | Monday 19 May 2025 19:53:14 +0000 (0:00:00.908) 0:00:30.432 ************ 2025-05-19 19:54:11.196239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196315 | orchestrator | 2025-05-19 19:54:11.196326 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-19 19:54:11.196336 | orchestrator | Monday 19 May 2025 19:53:15 +0000 (0:00:01.234) 0:00:31.667 ************ 2025-05-19 19:54:11.196347 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.196359 | orchestrator | 2025-05-19 19:54:11.196370 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-19 19:54:11.196381 | orchestrator | Monday 19 May 2025 19:53:15 +0000 (0:00:00.276) 0:00:31.943 ************ 2025-05-19 19:54:11.196392 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.196404 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:11.196415 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:11.196426 | orchestrator | 2025-05-19 19:54:11.196437 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 19:54:11.196449 | orchestrator | Monday 19 May 2025 19:53:16 +0000 (0:00:00.622) 0:00:32.565 ************ 2025-05-19 19:54:11.196461 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:54:11.196474 | orchestrator | 2025-05-19 19:54:11.196486 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-19 19:54:11.196497 | orchestrator | Monday 19 May 2025 19:53:18 +0000 (0:00:02.111) 0:00:34.677 ************ 2025-05-19 19:54:11.196515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196598 | orchestrator | 2025-05-19 19:54:11.196610 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-19 19:54:11.196622 | orchestrator | Monday 19 May 2025 19:53:21 +0000 (0:00:02.648) 0:00:37.325 ************ 2025-05-19 19:54:11.196635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196647 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:11.196666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196685 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.196705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196717 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:11.196729 | orchestrator | 2025-05-19 19:54:11.196741 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-19 19:54:11.196753 | orchestrator | Monday 19 May 2025 19:53:22 +0000 (0:00:01.092) 0:00:38.418 ************ 2025-05-19 19:54:11.196765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196779 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.196792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196804 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:11.196823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.196843 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:11.196855 | orchestrator | 2025-05-19 19:54:11.196866 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-19 19:54:11.196877 | orchestrator | Monday 19 May 2025 19:53:24 +0000 (0:00:02.547) 0:00:40.965 ************ 2025-05-19 19:54:11.196898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196933 | orchestrator | 2025-05-19 19:54:11.196944 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-19 19:54:11.196963 | orchestrator | Monday 19 May 2025 19:53:26 +0000 (0:00:01.917) 0:00:42.882 ************ 2025-05-19 19:54:11.196980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.196993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.197013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.197025 | orchestrator | 2025-05-19 19:54:11.197036 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-19 19:54:11.197047 | orchestrator | Monday 19 May 2025 19:53:31 +0000 (0:00:04.730) 0:00:47.613 ************ 2025-05-19 19:54:11.197059 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 19:54:11.197071 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 19:54:11.197082 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 19:54:11.197092 | orchestrator | 2025-05-19 19:54:11.197103 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-19 19:54:11.197114 | orchestrator | Monday 19 May 2025 19:53:34 +0000 (0:00:02.948) 0:00:50.562 ************ 2025-05-19 19:54:11.197126 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:11.197138 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:11.197149 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:11.197168 | orchestrator | 2025-05-19 19:54:11.197180 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-19 19:54:11.197192 | orchestrator | Monday 19 May 2025 19:53:36 +0000 (0:00:01.875) 0:00:52.437 ************ 2025-05-19 19:54:11.197203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.197215 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:11.197232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.197244 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:11.197265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 19:54:11.197275 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:11.197286 | orchestrator | 2025-05-19 19:54:11.197297 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-19 19:54:11.197308 | orchestrator | Monday 19 May 2025 19:53:37 +0000 (0:00:01.474) 0:00:53.912 ************ 2025-05-19 19:54:11.197318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.197338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.197363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 19:54:11.197375 | orchestrator | 2025-05-19 19:54:11.197386 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-19 19:54:11.197397 | orchestrator | Monday 19 May 2025 19:53:39 +0000 (0:00:01.740) 0:00:55.653 ************ 2025-05-19 19:54:11.197408 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:11.197418 | orchestrator | 2025-05-19 19:54:11.197429 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-19 19:54:11.197440 | orchestrator | Monday 19 May 2025 19:53:42 +0000 (0:00:02.881) 0:00:58.535 ************ 2025-05-19 19:54:11.197451 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:11.197462 | orchestrator | 2025-05-19 19:54:11.197474 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-19 19:54:11.197485 | orchestrator | Monday 19 May 2025 19:53:45 +0000 (0:00:02.507) 0:01:01.042 ************ 2025-05-19 19:54:11.197502 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:11.197513 | orchestrator | 2025-05-19 19:54:11.197524 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 19:54:11.197535 | orchestrator | Monday 19 May 2025 19:53:58 +0000 (0:00:13.645) 0:01:14.688 ************ 2025-05-19 19:54:11.197545 | orchestrator | 2025-05-19 19:54:11.197556 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 19:54:11.197608 | orchestrator | Monday 19 May 2025 19:53:58 +0000 (0:00:00.110) 0:01:14.799 ************ 2025-05-19 19:54:11.197620 | orchestrator | 2025-05-19 19:54:11.197631 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 19:54:11.197641 | orchestrator | Monday 19 May 2025 19:53:58 +0000 (0:00:00.209) 0:01:15.008 ************ 2025-05-19 19:54:11.197653 | orchestrator | 2025-05-19 19:54:11.197664 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-19 19:54:11.197686 | orchestrator | Monday 19 May 2025 19:53:59 +0000 (0:00:00.072) 0:01:15.081 
************
2025-05-19 19:54:11.197697 | orchestrator | changed: [testbed-node-0]
2025-05-19 19:54:11.197708 | orchestrator | changed: [testbed-node-1]
2025-05-19 19:54:11.197720 | orchestrator | changed: [testbed-node-2]
2025-05-19 19:54:11.197731 | orchestrator |
2025-05-19 19:54:11.197742 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:54:11.197755 | orchestrator | testbed-node-0 : ok=21 changed=15 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0
2025-05-19 19:54:11.197767 | orchestrator | testbed-node-1 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-19 19:54:11.197778 | orchestrator | testbed-node-2 : ok=12 changed=8 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-05-19 19:54:11.197788 | orchestrator |
2025-05-19 19:54:11.197798 | orchestrator |
2025-05-19 19:54:11.197809 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 19:54:11.197821 | orchestrator | Monday 19 May 2025 19:54:09 +0000 (0:00:10.791) 0:01:25.872 ************
2025-05-19 19:54:11.197833 | orchestrator | ===============================================================================
2025-05-19 19:54:11.197844 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.65s
2025-05-19 19:54:11.197855 | orchestrator | placement : Restart placement-api container ---------------------------- 10.79s
2025-05-19 19:54:11.197865 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.81s
2025-05-19 19:54:11.197875 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.09s
2025-05-19 19:54:11.197887 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.73s
2025-05-19 19:54:11.197897 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.99s
2025-05-19 19:54:11.197908 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.84s
2025-05-19 19:54:11.197920 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.32s
2025-05-19 19:54:11.197932 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.11s
2025-05-19 19:54:11.197942 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.95s
2025-05-19 19:54:11.197953 | orchestrator | placement : Creating placement databases -------------------------------- 2.88s
2025-05-19 19:54:11.197965 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.65s
2025-05-19 19:54:11.197975 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.55s
2025-05-19 19:54:11.197985 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.51s
2025-05-19 19:54:11.197996 | orchestrator | placement : include_tasks ----------------------------------------------- 2.11s
2025-05-19 19:54:11.198083 | orchestrator | placement : Copying over config.json files for services ----------------- 1.92s
2025-05-19 19:54:11.198100 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.88s
2025-05-19 19:54:11.198112 | orchestrator | placement : Check placement containers ---------------------------------- 1.74s
2025-05-19 19:54:11.198124 | orchestrator | placement : Copying over
existing policy file --------------------------- 1.47s 2025-05-19 19:54:11.198134 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.23s 2025-05-19 19:54:11.198146 | orchestrator | 2025-05-19 19:54:11 | INFO  | Task 82ade2f9-6b8e-4fcc-af3e-7f45ca823302 is in state STARTED 2025-05-19 19:54:11.198158 | orchestrator | 2025-05-19 19:54:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:11.198169 | orchestrator | 2025-05-19 19:54:11 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:11.198179 | orchestrator | 2025-05-19 19:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:14.240604 | orchestrator | 2025-05-19 19:54:14 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:14.240792 | orchestrator | 2025-05-19 19:54:14 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:14.241703 | orchestrator | 2025-05-19 19:54:14 | INFO  | Task 82ade2f9-6b8e-4fcc-af3e-7f45ca823302 is in state STARTED 2025-05-19 19:54:14.242308 | orchestrator | 2025-05-19 19:54:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:14.243118 | orchestrator | 2025-05-19 19:54:14 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:14.243131 | orchestrator | 2025-05-19 19:54:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:17.278451 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:17.278588 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:17.278838 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task 82ade2f9-6b8e-4fcc-af3e-7f45ca823302 is in state SUCCESS 2025-05-19 19:54:17.281028 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:17.281414 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:17.282065 | orchestrator | 2025-05-19 19:54:17 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:17.282094 | orchestrator | 2025-05-19 19:54:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:20.311249 | orchestrator | 2025-05-19 19:54:20 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:20.318314 | orchestrator | 2025-05-19 19:54:20 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:20.318400 | orchestrator | 2025-05-19 19:54:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:20.318410 | orchestrator | 2025-05-19 19:54:20 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:20.318417 | orchestrator | 2025-05-19 19:54:20 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:20.318426 | orchestrator | 2025-05-19 19:54:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:23.345848 | orchestrator | 2025-05-19 19:54:23 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:23.345962 | orchestrator | 2025-05-19 19:54:23 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:23.347125 | orchestrator | 2025-05-19 19:54:23 | INFO  
| Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:23.347910 | orchestrator | 2025-05-19 19:54:23 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:23.349619 | orchestrator | 2025-05-19 19:54:23 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:23.349684 | orchestrator | 2025-05-19 19:54:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:26.381950 | orchestrator | 2025-05-19 19:54:26 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:26.386498 | orchestrator | 2025-05-19 19:54:26 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:26.387882 | orchestrator | 2025-05-19 19:54:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:26.388005 | orchestrator | 2025-05-19 19:54:26 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:26.388028 | orchestrator | 2025-05-19 19:54:26 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:26.388044 | orchestrator | 2025-05-19 19:54:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:29.422105 | orchestrator | 2025-05-19 19:54:29 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:29.425213 | orchestrator | 2025-05-19 19:54:29 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:29.425308 | orchestrator | 2025-05-19 19:54:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:29.425941 | orchestrator | 2025-05-19 19:54:29 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:29.426259 | orchestrator | 2025-05-19 19:54:29 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:29.426301 | orchestrator | 2025-05-19 19:54:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:32.453284 | orchestrator | 2025-05-19 19:54:32 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:32.454399 | orchestrator | 2025-05-19 19:54:32 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:32.457086 | orchestrator | 2025-05-19 19:54:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:32.457736 | orchestrator | 2025-05-19 19:54:32 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:32.458198 | orchestrator | 2025-05-19 19:54:32 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:32.458237 | orchestrator | 2025-05-19 19:54:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:35.489947 | orchestrator | 2025-05-19 19:54:35 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:35.490087 | orchestrator | 2025-05-19 19:54:35 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:35.493064 | orchestrator | 2025-05-19 19:54:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:35.493199 | orchestrator | 2025-05-19 19:54:35 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:35.493217 | orchestrator | 2025-05-19 19:54:35 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:35.493223 | orchestrator | 
2025-05-19 19:54:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:38.527311 | orchestrator | 2025-05-19 19:54:38 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:38.527391 | orchestrator | 2025-05-19 19:54:38 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:38.528338 | orchestrator | 2025-05-19 19:54:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:38.528810 | orchestrator | 2025-05-19 19:54:38 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:38.529288 | orchestrator | 2025-05-19 19:54:38 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:38.529340 | orchestrator | 2025-05-19 19:54:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:41.557933 | orchestrator | 2025-05-19 19:54:41 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:41.558574 | orchestrator | 2025-05-19 19:54:41 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:41.558884 | orchestrator | 2025-05-19 19:54:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:41.559707 | orchestrator | 2025-05-19 19:54:41 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:41.560872 | orchestrator | 2025-05-19 19:54:41 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:41.560935 | orchestrator | 2025-05-19 19:54:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:44.589877 | orchestrator | 2025-05-19 19:54:44 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:44.589978 | orchestrator | 2025-05-19 19:54:44 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:44.590009 | orchestrator | 2025-05-19 19:54:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:44.590130 | orchestrator | 2025-05-19 19:54:44 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:44.590140 | orchestrator | 2025-05-19 19:54:44 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:44.590147 | orchestrator | 2025-05-19 19:54:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:47.619044 | orchestrator | 2025-05-19 19:54:47 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state STARTED 2025-05-19 19:54:47.619290 | orchestrator | 2025-05-19 19:54:47 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:47.619962 | orchestrator | 2025-05-19 19:54:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:47.622329 | orchestrator | 2025-05-19 19:54:47 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:47.623024 | orchestrator | 2025-05-19 19:54:47 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:47.623106 | orchestrator | 2025-05-19 19:54:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:50.657167 | orchestrator | 2025-05-19 19:54:50 | INFO  | Task e04ce7ad-0c28-41fe-8955-d00adc6e680f is in state SUCCESS 2025-05-19 19:54:50.659622 | orchestrator | 2025-05-19 19:54:50.660017 | orchestrator | 2025-05-19 19:54:50.660037 | orchestrator | PLAY [Group hosts based on configuration] 
2025-05-19 19:54:50.660037 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 19:54:50.660049 | orchestrator |
2025-05-19 19:54:50.660060 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 19:54:50.660072 | orchestrator | Monday 19 May 2025 19:54:13 +0000 (0:00:00.393) 0:00:00.393 ************
2025-05-19 19:54:50.660084 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:54:50.660097 | orchestrator | ok: [testbed-node-1]
2025-05-19 19:54:50.660107 | orchestrator | ok: [testbed-node-2]
2025-05-19 19:54:50.660118 | orchestrator |
2025-05-19 19:54:50.660129 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 19:54:50.660140 | orchestrator | Monday 19 May 2025 19:54:13 +0000 (0:00:00.706) 0:00:01.099 ************
2025-05-19 19:54:50.660152 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-19 19:54:50.660163 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-19 19:54:50.660174 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-19 19:54:50.660185 | orchestrator |
2025-05-19 19:54:50.660196 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-19 19:54:50.660207 | orchestrator |
2025-05-19 19:54:50.660218 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-19 19:54:50.660229 | orchestrator | Monday 19 May 2025 19:54:14 +0000 (0:00:00.500) 0:00:01.600 ************
2025-05-19 19:54:50.660267 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:54:50.660278 | orchestrator | ok: [testbed-node-2]
2025-05-19 19:54:50.660290 | orchestrator | ok: [testbed-node-1]
2025-05-19 19:54:50.660301 | orchestrator |
2025-05-19 19:54:50.660311 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 19:54:50.660323 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 19:54:50.660337 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 19:54:50.660348 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 19:54:50.660359 | orchestrator |
2025-05-19 19:54:50.660370 | orchestrator |
2025-05-19 19:54:50.660381 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 19:54:50.660392 | orchestrator | Monday 19 May 2025 19:54:15 +0000 (0:00:00.989) 0:00:02.589 ************
2025-05-19 19:54:50.660402 | orchestrator | ===============================================================================
2025-05-19 19:54:50.660413 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.99s
2025-05-19 19:54:50.660424 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2025-05-19 19:54:50.660963 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-05-19 19:54:50.660985 | orchestrator |
2025-05-19 19:54:50.661003 | orchestrator |
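The two short plays above are the usual kolla-ansible preamble: hosts are first sorted into Ansible groups with group_by (which is why the loop item is literally named enable_keystone_True), and the Keystone play then only blocks until the public API port answers before the actual deployment work begins. A rough sketch of how such tasks are commonly written follows; the variable names, the fallback hostname and port 5000 are assumptions for illustration, not values taken from this job.

- name: Group hosts based on enabled services  # sketch of the pattern behind the task above
  ansible.builtin.group_by:
    key: "enable_keystone_{{ enable_keystone | bool }}"
  changed_when: false

- name: Waiting for Keystone public port to be UP  # sketch; host variable and port are assumptions
  ansible.builtin.wait_for:
    host: "{{ kolla_internal_fqdn | default('api-int.testbed.osism.xyz') }}"
    port: 5000  # Keystone's conventional public port; not shown in this log
    timeout: 300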
2025-05-19 19:54:50.661014 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 19:54:50.661025 | orchestrator |
2025-05-19 19:54:50.661036 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 19:54:50.661047 | orchestrator | Monday 19 May 2025 19:51:45 +0000 (0:00:00.468) 0:00:00.468 ************
2025-05-19 19:54:50.661058 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:54:50.661069 | orchestrator | ok: [testbed-node-1]
2025-05-19 19:54:50.661080 | orchestrator | ok: [testbed-node-2]
2025-05-19 19:54:50.661090 | orchestrator |
2025-05-19 19:54:50.661101 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 19:54:50.661112 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.532) 0:00:01.000 ************
2025-05-19 19:54:50.661124 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-05-19 19:54:50.661138 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-05-19 19:54:50.661157 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-05-19 19:54:50.661722 | orchestrator |
2025-05-19 19:54:50.661753 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-05-19 19:54:50.661771 | orchestrator |
2025-05-19 19:54:50.661792 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-19 19:54:50.661833 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.416) 0:00:01.416 ************
2025-05-19 19:54:50.661853 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 19:54:50.661865 | orchestrator |
2025-05-19 19:54:50.661876 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-05-19 19:54:50.661890 | orchestrator | Monday 19 May 2025 19:51:47 +0000 (0:00:00.705) 0:00:02.121 ************
2025-05-19 19:54:50.661908 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-05-19 19:54:50.661925 | orchestrator |
2025-05-19 19:54:50.661943 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-05-19 19:54:50.661960 | orchestrator | Monday 19 May 2025 19:51:51 +0000 (0:00:03.710) 0:00:05.832 ************
2025-05-19 19:54:50.661979 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-05-19 19:54:50.662000 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-05-19 19:54:50.662136 | orchestrator |
2025-05-19 19:54:50.662153 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-05-19 19:54:50.662164 | orchestrator | Monday 19 May 2025 19:51:57 +0000 (0:00:06.718) 0:00:12.550 ************
2025-05-19 19:54:50.662175 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-19 19:54:50.662186 | orchestrator |
2025-05-19 19:54:50.662197 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-05-19 19:54:50.662208 | orchestrator | Monday 19 May 2025 19:52:01 +0000 (0:00:03.731) 0:00:16.282 ************
2025-05-19 19:54:50.662343 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 19:54:50.662360 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-05-19 19:54:50.662371 | orchestrator |
2025-05-19 19:54:50.662383 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-05-19 19:54:50.662393 | orchestrator | Monday 19 May 2025 19:52:05 +0000 (0:00:03.960) 0:00:20.242 ************
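The service-ks-register tasks above, together with the role and role-grant steps that follow below, register Designate in Keystone: a designate service of type dns, internal and public endpoints on port 9001, the service project, a designate service user and an admin role assignment. In kolla-ansible this runs through the kolla_toolbox container; a simplified equivalent using the openstack.cloud collection could look like the following sketch, where the cloud profile and the password variable are assumptions rather than the exact tasks used here.

- name: Create the designate service  # service name, type and URLs taken from the log above
  openstack.cloud.catalog_service:
    cloud: admin  # assumed clouds.yaml entry
    name: designate
    service_type: dns
    state: present

- name: Create the designate endpoints
  openstack.cloud.endpoint:
    cloud: admin
    service: designate
    endpoint_interface: "{{ item.interface }}"
    url: "{{ item.url }}"
    state: present
  loop:
    - { interface: internal, url: "https://api-int.testbed.osism.xyz:9001" }
    - { interface: public, url: "https://api.testbed.osism.xyz:9001" }

- name: Create the service project
  openstack.cloud.project:
    cloud: admin
    name: service
    state: present

- name: Create the designate service user
  openstack.cloud.identity_user:
    cloud: admin
    name: designate
    password: "{{ designate_keystone_password }}"  # placeholder secret
    default_project: service
    state: present

- name: Grant the admin role to the designate user in the service project
  openstack.cloud.role_assignment:
    cloud: admin
    user: designate
    role: admin
    project: service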
2025-05-19 19:54:50.662404 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:54:50.662415 | orchestrator | 2025-05-19 19:54:50.662426 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-19 19:54:50.662437 | orchestrator | Monday 19 May 2025 19:52:08 +0000 (0:00:03.295) 0:00:23.538 ************ 2025-05-19 19:54:50.662447 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-19 19:54:50.662458 | orchestrator | 2025-05-19 19:54:50.662469 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-19 19:54:50.662479 | orchestrator | Monday 19 May 2025 19:52:13 +0000 (0:00:04.327) 0:00:27.865 ************ 2025-05-19 19:54:50.662604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.662635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.662669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.662707 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.662998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663182 | orchestrator | 2025-05-19 19:54:50.663193 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-19 19:54:50.663203 | orchestrator | Monday 19 May 2025 19:52:16 +0000 (0:00:02.919) 0:00:30.785 ************ 2025-05-19 19:54:50.663213 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.663223 | orchestrator | 2025-05-19 19:54:50.663233 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-19 19:54:50.663251 | orchestrator | Monday 19 May 2025 19:52:16 +0000 (0:00:00.135) 0:00:30.921 ************ 2025-05-19 19:54:50.663260 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.663270 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.663279 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.663289 | orchestrator | 2025-05-19 19:54:50.663299 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 19:54:50.663309 | orchestrator | Monday 19 May 2025 19:52:16 +0000 (0:00:00.452) 0:00:31.373 ************ 2025-05-19 19:54:50.663319 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:54:50.663328 | orchestrator | 2025-05-19 19:54:50.663344 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-19 19:54:50.663354 | orchestrator | Monday 19 May 2025 19:52:17 +0000 (0:00:00.709) 0:00:32.083 ************ 2025-05-19 19:54:50.663364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.663402 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.663415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.663425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.663819 | orchestrator | 2025-05-19 19:54:50.663829 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-19 19:54:50.663839 | orchestrator | Monday 19 May 2025 19:52:23 +0000 (0:00:06.509) 0:00:38.593 ************ 2025-05-19 19:54:50.663855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.663895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.663908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.663955 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.663966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.664013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.664033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664138 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.664151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.664190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.664206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664275 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.664288 | orchestrator | 2025-05-19 19:54:50.664303 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-19 19:54:50.664316 | orchestrator | Monday 19 May 2025 19:52:26 +0000 (0:00:02.789) 0:00:41.382 ************ 2025-05-19 19:54:50.664337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.664389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.664401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664441 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.664453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.664462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.664518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664559 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.664572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.664580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.664613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664628 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664652 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.664660 | orchestrator | 2025-05-19 19:54:50.664668 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-19 19:54:50.664676 | orchestrator | Monday 19 May 2025 19:52:28 +0000 (0:00:01.842) 0:00:43.225 ************ 2025-05-19 19:54:50.664689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.664720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.664740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.664748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664778 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.664990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.664998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665043 | orchestrator | 2025-05-19 19:54:50.665051 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-19 19:54:50.665059 | orchestrator | Monday 19 May 2025 19:52:37 +0000 (0:00:08.736) 0:00:51.962 ************ 2025-05-19 19:54:50.665100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.665116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.665130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.665150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665479 | orchestrator | 2025-05-19 19:54:50.665516 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-19 19:54:50.665529 | orchestrator | Monday 19 May 2025 19:53:02 +0000 (0:00:25.111) 0:01:17.073 ************ 2025-05-19 19:54:50.665543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 19:54:50.665556 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 19:54:50.665569 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 19:54:50.665582 | orchestrator | 2025-05-19 19:54:50.665594 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-19 19:54:50.665712 | orchestrator | Monday 19 May 2025 19:53:09 +0000 (0:00:07.211) 0:01:24.285 ************ 2025-05-19 19:54:50.665721 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 19:54:50.665729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 19:54:50.665737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 19:54:50.665745 | orchestrator | 2025-05-19 19:54:50.665753 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-19 19:54:50.665760 | orchestrator | Monday 19 May 2025 19:53:15 +0000 (0:00:05.839) 0:01:30.124 ************ 2025-05-19 19:54:50.665783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.665802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.665811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.665820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 
19:54:50.665880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.665983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.665995 | orchestrator 
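In the rndc.conf task being logged here and in the rndc.key task that follows, only the designate-backend-bind9 and designate-worker items report changed on each node while the other services are skipped: per the log, only the BIND 9 backend and the worker receive rndc credentials for zone management (designate-sink additionally skips everywhere because it is disabled). The sketch below reproduces that selection and the general shape of an rndc.conf; the key algorithm and secret are placeholders for generic BIND 9 syntax, not the testbed's actual template or key material.

# Illustrative sketch: which designate services get rndc.conf / rndc.key,
# matching the changed/skipping split visible in this log, plus a generic
# BIND 9 rndc.conf layout with a placeholder secret.
RNDC_CONSUMERS = {"designate-backend-bind9", "designate-worker"}

RNDC_CONF_SKETCH = """\
key "rndc-key" {
    algorithm hmac-sha256;          # placeholder algorithm
    secret "<base64-secret-here>";  # placeholder, never a real key
};

options {
    default-key "rndc-key";
    default-server 127.0.0.1;
    default-port 953;
};
"""

for name in ("designate-api", "designate-backend-bind9", "designate-central",
             "designate-mdns", "designate-producer", "designate-sink",
             "designate-worker"):
    action = "copy rndc.conf/rndc.key" if name in RNDC_CONSUMERS else "skip"
    print(f"{name}: {action}")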
| changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666011 | orchestrator | 2025-05-19 19:54:50.666072 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-19 19:54:50.666081 | orchestrator | Monday 19 May 2025 19:53:19 +0000 (0:00:04.229) 0:01:34.354 ************ 2025-05-19 19:54:50.666095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 
'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666458 | orchestrator | 2025-05-19 19:54:50.666471 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 19:54:50.666485 | orchestrator | Monday 19 May 2025 19:53:23 +0000 (0:00:03.639) 0:01:37.994 ************ 2025-05-19 19:54:50.666549 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.666558 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.666566 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.666574 | orchestrator | 2025-05-19 19:54:50.666582 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-05-19 19:54:50.666590 | orchestrator | Monday 19 May 2025 19:53:23 +0000 (0:00:00.346) 0:01:38.340 ************ 2025-05-19 19:54:50.666598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.666621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666702 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.666725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.666758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666823 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.666840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 19:54:50.666859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 19:54:50.666867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.666911 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.666918 | orchestrator | 2025-05-19 19:54:50.666924 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-19 19:54:50.666931 | orchestrator | Monday 19 May 2025 19:53:25 +0000 (0:00:02.117) 0:01:40.457 ************ 2025-05-19 19:54:50.666943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.666954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.666961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 19:54:50.666968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.666987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
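The "Check designate containers" task shown here iterates over a map of service definitions (container name, image, volumes, healthcheck command, and optional haproxy frontends) and skips every entry whose 'enabled' flag is false, which is why designate-sink is skipped on all three nodes while the other services report changed. The following sketch only illustrates that selection logic using the data shapes visible in the log output; it is not the kolla-ansible implementation itself, and the dict contents are trimmed copies of the items printed above.

# Illustrative sketch: filter service definitions the way the loop appears to,
# keeping only entries whose 'enabled' flag is true (values copied from the log).
designate_services = {
    "designate-worker": {
        "container_name": "designate_worker",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"]},
    },
    "designate-sink": {
        "container_name": "designate_sink",
        "enabled": False,  # disabled, so the check/deploy loop skips it on every node
        "image": "registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206",
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_port designate-sink 5672"]},
    },
}

enabled_services = {name: svc for name, svc in designate_services.items() if svc["enabled"]}
print(sorted(enabled_services))  # ['designate-worker'] -- designate-sink is filtered out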
2025-05-19 19:54:50.667033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.667094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 19:54:50.667112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.667128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 19:54:50.667135 | orchestrator | 2025-05-19 19:54:50.667142 | orchestrator | TASK [designate : include_tasks] 
*********************************************** 2025-05-19 19:54:50.667149 | orchestrator | Monday 19 May 2025 19:53:30 +0000 (0:00:05.176) 0:01:45.634 ************ 2025-05-19 19:54:50.667156 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:54:50.667163 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:54:50.667169 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:54:50.667176 | orchestrator | 2025-05-19 19:54:50.667183 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-05-19 19:54:50.667189 | orchestrator | Monday 19 May 2025 19:53:31 +0000 (0:00:00.834) 0:01:46.468 ************ 2025-05-19 19:54:50.667197 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-05-19 19:54:50.667203 | orchestrator | 2025-05-19 19:54:50.667210 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-05-19 19:54:50.667217 | orchestrator | Monday 19 May 2025 19:53:34 +0000 (0:00:02.650) 0:01:49.119 ************ 2025-05-19 19:54:50.667223 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 19:54:50.667230 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-05-19 19:54:50.667237 | orchestrator | 2025-05-19 19:54:50.667243 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-05-19 19:54:50.667250 | orchestrator | Monday 19 May 2025 19:53:36 +0000 (0:00:02.435) 0:01:51.554 ************ 2025-05-19 19:54:50.667256 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667263 | orchestrator | 2025-05-19 19:54:50.667270 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 19:54:50.667276 | orchestrator | Monday 19 May 2025 19:53:53 +0000 (0:00:17.067) 0:02:08.621 ************ 2025-05-19 19:54:50.667283 | orchestrator | 2025-05-19 19:54:50.667289 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 19:54:50.667336 | orchestrator | Monday 19 May 2025 19:53:54 +0000 (0:00:00.142) 0:02:08.764 ************ 2025-05-19 19:54:50.667347 | orchestrator | 2025-05-19 19:54:50.667363 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-05-19 19:54:50.667378 | orchestrator | Monday 19 May 2025 19:53:54 +0000 (0:00:00.144) 0:02:08.908 ************ 2025-05-19 19:54:50.667388 | orchestrator | 2025-05-19 19:54:50.667399 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-05-19 19:54:50.667409 | orchestrator | Monday 19 May 2025 19:53:54 +0000 (0:00:00.093) 0:02:09.002 ************ 2025-05-19 19:54:50.667420 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667429 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667439 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667450 | orchestrator | 2025-05-19 19:54:50.667462 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-05-19 19:54:50.667485 | orchestrator | Monday 19 May 2025 19:54:02 +0000 (0:00:08.545) 0:02:17.548 ************ 2025-05-19 19:54:50.667514 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667526 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667536 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667547 | orchestrator | 2025-05-19 19:54:50.667558 | orchestrator | RUNNING HANDLER 
[designate : Restart designate-central container] ************** 2025-05-19 19:54:50.667565 | orchestrator | Monday 19 May 2025 19:54:09 +0000 (0:00:07.048) 0:02:24.596 ************ 2025-05-19 19:54:50.667571 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667578 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667585 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667591 | orchestrator | 2025-05-19 19:54:50.667603 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-05-19 19:54:50.667609 | orchestrator | Monday 19 May 2025 19:54:21 +0000 (0:00:11.177) 0:02:35.774 ************ 2025-05-19 19:54:50.667616 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667622 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667629 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667636 | orchestrator | 2025-05-19 19:54:50.667642 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-05-19 19:54:50.667649 | orchestrator | Monday 19 May 2025 19:54:28 +0000 (0:00:07.710) 0:02:43.485 ************ 2025-05-19 19:54:50.667655 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667662 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667668 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667674 | orchestrator | 2025-05-19 19:54:50.667681 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-19 19:54:50.667688 | orchestrator | Monday 19 May 2025 19:54:35 +0000 (0:00:06.314) 0:02:49.799 ************ 2025-05-19 19:54:50.667694 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667701 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:54:50.667707 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:54:50.667714 | orchestrator | 2025-05-19 19:54:50.667720 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-19 19:54:50.667727 | orchestrator | Monday 19 May 2025 19:54:43 +0000 (0:00:08.061) 0:02:57.861 ************ 2025-05-19 19:54:50.667733 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:54:50.667740 | orchestrator | 2025-05-19 19:54:50.667746 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:54:50.667760 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:54:50.667768 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:54:50.667774 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 19:54:50.667781 | orchestrator | 2025-05-19 19:54:50.667787 | orchestrator | 2025-05-19 19:54:50.667794 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:54:50.667801 | orchestrator | Monday 19 May 2025 19:54:49 +0000 (0:00:06.041) 0:03:03.902 ************ 2025-05-19 19:54:50.667807 | orchestrator | =============================================================================== 2025-05-19 19:54:50.667813 | orchestrator | designate : Copying over designate.conf -------------------------------- 25.11s 2025-05-19 19:54:50.667820 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.07s 2025-05-19 19:54:50.667827 | orchestrator | 
designate : Restart designate-central container ------------------------ 11.18s 2025-05-19 19:54:50.667833 | orchestrator | designate : Copying over config.json files for services ----------------- 8.74s 2025-05-19 19:54:50.667840 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.55s 2025-05-19 19:54:50.667846 | orchestrator | designate : Restart designate-worker container -------------------------- 8.06s 2025-05-19 19:54:50.667858 | orchestrator | designate : Restart designate-producer container ------------------------ 7.71s 2025-05-19 19:54:50.667865 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.21s 2025-05-19 19:54:50.667871 | orchestrator | designate : Restart designate-api container ----------------------------- 7.05s 2025-05-19 19:54:50.667878 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.72s 2025-05-19 19:54:50.667884 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.51s 2025-05-19 19:54:50.667891 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.31s 2025-05-19 19:54:50.667897 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.04s 2025-05-19 19:54:50.667904 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.84s 2025-05-19 19:54:50.667910 | orchestrator | designate : Check designate containers ---------------------------------- 5.18s 2025-05-19 19:54:50.667917 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.33s 2025-05-19 19:54:50.667923 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.23s 2025-05-19 19:54:50.667930 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.96s 2025-05-19 19:54:50.667936 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.73s 2025-05-19 19:54:50.667943 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.71s 2025-05-19 19:54:50.667949 | orchestrator | 2025-05-19 19:54:50 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:50.667957 | orchestrator | 2025-05-19 19:54:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:50.667964 | orchestrator | 2025-05-19 19:54:50 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:50.672295 | orchestrator | 2025-05-19 19:54:50 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:50.672361 | orchestrator | 2025-05-19 19:54:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:53.700197 | orchestrator | 2025-05-19 19:54:53 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:53.700359 | orchestrator | 2025-05-19 19:54:53 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:54:53.700972 | orchestrator | 2025-05-19 19:54:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:53.701249 | orchestrator | 2025-05-19 19:54:53 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:53.701839 | orchestrator | 2025-05-19 19:54:53 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:53.701881 | 
orchestrator | 2025-05-19 19:54:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:56.724289 | orchestrator | 2025-05-19 19:54:56 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:56.725077 | orchestrator | 2025-05-19 19:54:56 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:54:56.725102 | orchestrator | 2025-05-19 19:54:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:56.725700 | orchestrator | 2025-05-19 19:54:56 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:56.726342 | orchestrator | 2025-05-19 19:54:56 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:56.726362 | orchestrator | 2025-05-19 19:54:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:54:59.753098 | orchestrator | 2025-05-19 19:54:59 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:54:59.753315 | orchestrator | 2025-05-19 19:54:59 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:54:59.753342 | orchestrator | 2025-05-19 19:54:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:54:59.754114 | orchestrator | 2025-05-19 19:54:59 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:54:59.755726 | orchestrator | 2025-05-19 19:54:59 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:54:59.755861 | orchestrator | 2025-05-19 19:54:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:02.791765 | orchestrator | 2025-05-19 19:55:02 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:02.791848 | orchestrator | 2025-05-19 19:55:02 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:02.792050 | orchestrator | 2025-05-19 19:55:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:02.793309 | orchestrator | 2025-05-19 19:55:02 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:02.793345 | orchestrator | 2025-05-19 19:55:02 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:02.793350 | orchestrator | 2025-05-19 19:55:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:05.828525 | orchestrator | 2025-05-19 19:55:05 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:05.828643 | orchestrator | 2025-05-19 19:55:05 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:05.828861 | orchestrator | 2025-05-19 19:55:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:05.829522 | orchestrator | 2025-05-19 19:55:05 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:05.829944 | orchestrator | 2025-05-19 19:55:05 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:05.829979 | orchestrator | 2025-05-19 19:55:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:08.861318 | orchestrator | 2025-05-19 19:55:08 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:08.861526 | orchestrator | 2025-05-19 19:55:08 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:08.861550 | 
orchestrator | 2025-05-19 19:55:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:08.862059 | orchestrator | 2025-05-19 19:55:08 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:08.862690 | orchestrator | 2025-05-19 19:55:08 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:08.862727 | orchestrator | 2025-05-19 19:55:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:11.901604 | orchestrator | 2025-05-19 19:55:11 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:11.903804 | orchestrator | 2025-05-19 19:55:11 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:11.903890 | orchestrator | 2025-05-19 19:55:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:11.904539 | orchestrator | 2025-05-19 19:55:11 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:11.906256 | orchestrator | 2025-05-19 19:55:11 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:11.906345 | orchestrator | 2025-05-19 19:55:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:14.936224 | orchestrator | 2025-05-19 19:55:14 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:14.936320 | orchestrator | 2025-05-19 19:55:14 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:14.936330 | orchestrator | 2025-05-19 19:55:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:14.936338 | orchestrator | 2025-05-19 19:55:14 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:14.938250 | orchestrator | 2025-05-19 19:55:14 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:14.938306 | orchestrator | 2025-05-19 19:55:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:17.979984 | orchestrator | 2025-05-19 19:55:17 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:17.981533 | orchestrator | 2025-05-19 19:55:17 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:17.982898 | orchestrator | 2025-05-19 19:55:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:17.985022 | orchestrator | 2025-05-19 19:55:17 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:17.988167 | orchestrator | 2025-05-19 19:55:17 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:17.988544 | orchestrator | 2025-05-19 19:55:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:21.039692 | orchestrator | 2025-05-19 19:55:21 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:21.040358 | orchestrator | 2025-05-19 19:55:21 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:21.043114 | orchestrator | 2025-05-19 19:55:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:21.043895 | orchestrator | 2025-05-19 19:55:21 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:21.045124 | orchestrator | 2025-05-19 19:55:21 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 
2025-05-19 19:55:21.045166 | orchestrator | 2025-05-19 19:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:24.100032 | orchestrator | 2025-05-19 19:55:24 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:24.101788 | orchestrator | 2025-05-19 19:55:24 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state STARTED 2025-05-19 19:55:24.103162 | orchestrator | 2025-05-19 19:55:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:24.105164 | orchestrator | 2025-05-19 19:55:24 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:24.106898 | orchestrator | 2025-05-19 19:55:24 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:24.106953 | orchestrator | 2025-05-19 19:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:27.168506 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:27.168756 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:27.169905 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 is in state SUCCESS 2025-05-19 19:55:27.174286 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:27.175808 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:27.176865 | orchestrator | 2025-05-19 19:55:27 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:27.177160 | orchestrator | 2025-05-19 19:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:30.228080 | orchestrator | 2025-05-19 19:55:30 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:30.229665 | orchestrator | 2025-05-19 19:55:30 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:30.230160 | orchestrator | 2025-05-19 19:55:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:30.231698 | orchestrator | 2025-05-19 19:55:30 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:30.232318 | orchestrator | 2025-05-19 19:55:30 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:30.232547 | orchestrator | 2025-05-19 19:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:33.293646 | orchestrator | 2025-05-19 19:55:33 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:33.293766 | orchestrator | 2025-05-19 19:55:33 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:33.295384 | orchestrator | 2025-05-19 19:55:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:33.295478 | orchestrator | 2025-05-19 19:55:33 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:33.295649 | orchestrator | 2025-05-19 19:55:33 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:33.295670 | orchestrator | 2025-05-19 19:55:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:36.342371 | orchestrator | 2025-05-19 19:55:36 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 
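The interleaved INFO lines in this part of the log come from the deployment tooling polling several task IDs in parallel and sleeping between checks until each task leaves the STARTED state (task 91c7744b-4b7f-44aa-9ca1-f8f0f0024911 reaches SUCCESS at 19:55:27, 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d later at 19:56:18). The snippet below is only a generic sketch of that poll-and-wait pattern under the assumption of a get_task_state() helper, which is a hypothetical placeholder rather than an OSISM API; handling of failure states is omitted.

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll until no task is left in the STARTED state (illustrative sketch only).
    # get_task_state is a hypothetical callable returning e.g. "STARTED" or "SUCCESS".
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)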
2025-05-19 19:55:36.342663 | orchestrator | 2025-05-19 19:55:36 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:36.343124 | orchestrator | 2025-05-19 19:55:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:36.343728 | orchestrator | 2025-05-19 19:55:36 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:36.344477 | orchestrator | 2025-05-19 19:55:36 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:36.345146 | orchestrator | 2025-05-19 19:55:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:39.393269 | orchestrator | 2025-05-19 19:55:39 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:39.394463 | orchestrator | 2025-05-19 19:55:39 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:39.396914 | orchestrator | 2025-05-19 19:55:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:39.398766 | orchestrator | 2025-05-19 19:55:39 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:39.400348 | orchestrator | 2025-05-19 19:55:39 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:39.400429 | orchestrator | 2025-05-19 19:55:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:42.453881 | orchestrator | 2025-05-19 19:55:42 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:42.456965 | orchestrator | 2025-05-19 19:55:42 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:42.458482 | orchestrator | 2025-05-19 19:55:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:42.459725 | orchestrator | 2025-05-19 19:55:42 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:42.461143 | orchestrator | 2025-05-19 19:55:42 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:42.461198 | orchestrator | 2025-05-19 19:55:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:45.523764 | orchestrator | 2025-05-19 19:55:45 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:45.523839 | orchestrator | 2025-05-19 19:55:45 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:45.523844 | orchestrator | 2025-05-19 19:55:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:45.523862 | orchestrator | 2025-05-19 19:55:45 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:45.523866 | orchestrator | 2025-05-19 19:55:45 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:45.523871 | orchestrator | 2025-05-19 19:55:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:48.571118 | orchestrator | 2025-05-19 19:55:48 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:48.572083 | orchestrator | 2025-05-19 19:55:48 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:48.572620 | orchestrator | 2025-05-19 19:55:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:48.573187 | orchestrator | 2025-05-19 19:55:48 | INFO  | Task 
4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:48.573911 | orchestrator | 2025-05-19 19:55:48 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:48.573944 | orchestrator | 2025-05-19 19:55:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:51.612135 | orchestrator | 2025-05-19 19:55:51 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:51.612247 | orchestrator | 2025-05-19 19:55:51 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:51.612785 | orchestrator | 2025-05-19 19:55:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:51.613479 | orchestrator | 2025-05-19 19:55:51 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:51.614368 | orchestrator | 2025-05-19 19:55:51 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:51.614471 | orchestrator | 2025-05-19 19:55:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:54.659195 | orchestrator | 2025-05-19 19:55:54 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:54.660942 | orchestrator | 2025-05-19 19:55:54 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:54.662453 | orchestrator | 2025-05-19 19:55:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:54.663273 | orchestrator | 2025-05-19 19:55:54 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:54.664758 | orchestrator | 2025-05-19 19:55:54 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:54.664806 | orchestrator | 2025-05-19 19:55:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:55:57.699306 | orchestrator | 2025-05-19 19:55:57 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:55:57.699525 | orchestrator | 2025-05-19 19:55:57 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:55:57.701117 | orchestrator | 2025-05-19 19:55:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:55:57.701875 | orchestrator | 2025-05-19 19:55:57 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:55:57.702676 | orchestrator | 2025-05-19 19:55:57 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:55:57.702701 | orchestrator | 2025-05-19 19:55:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:00.751481 | orchestrator | 2025-05-19 19:56:00 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:00.751792 | orchestrator | 2025-05-19 19:56:00 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:00.753207 | orchestrator | 2025-05-19 19:56:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:00.754131 | orchestrator | 2025-05-19 19:56:00 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:00.755978 | orchestrator | 2025-05-19 19:56:00 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:00.756028 | orchestrator | 2025-05-19 19:56:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:03.793034 | orchestrator | 2025-05-19 19:56:03 | INFO  | Task 
cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:03.793124 | orchestrator | 2025-05-19 19:56:03 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:03.793622 | orchestrator | 2025-05-19 19:56:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:03.794216 | orchestrator | 2025-05-19 19:56:03 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:03.794817 | orchestrator | 2025-05-19 19:56:03 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:03.794843 | orchestrator | 2025-05-19 19:56:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:06.821783 | orchestrator | 2025-05-19 19:56:06 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:06.821980 | orchestrator | 2025-05-19 19:56:06 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:06.822011 | orchestrator | 2025-05-19 19:56:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:06.823759 | orchestrator | 2025-05-19 19:56:06 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:06.823791 | orchestrator | 2025-05-19 19:56:06 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:06.823802 | orchestrator | 2025-05-19 19:56:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:09.858615 | orchestrator | 2025-05-19 19:56:09 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:09.858734 | orchestrator | 2025-05-19 19:56:09 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:09.858762 | orchestrator | 2025-05-19 19:56:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:09.860423 | orchestrator | 2025-05-19 19:56:09 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:09.860463 | orchestrator | 2025-05-19 19:56:09 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:09.860482 | orchestrator | 2025-05-19 19:56:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:12.893076 | orchestrator | 2025-05-19 19:56:12 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:12.897884 | orchestrator | 2025-05-19 19:56:12 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:12.897995 | orchestrator | 2025-05-19 19:56:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:12.899286 | orchestrator | 2025-05-19 19:56:12 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:12.899929 | orchestrator | 2025-05-19 19:56:12 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:12.899975 | orchestrator | 2025-05-19 19:56:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:15.931178 | orchestrator | 2025-05-19 19:56:15 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:15.931299 | orchestrator | 2025-05-19 19:56:15 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:15.931311 | orchestrator | 2025-05-19 19:56:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:15.931320 | orchestrator | 2025-05-19 
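
The "is in state STARTED … Wait 1 second(s) until the next check" entries in this stretch of the log (continuing just below until a task reports SUCCESS) are a plain poll-until-done loop over the submitted task IDs. A minimal sketch of that pattern, assuming a hypothetical get_task_state() helper rather than the actual osism/Celery client:

    import time

    # Hypothetical stand-in for the osism/Celery result lookup; only here so the
    # sketch is self-contained. It is NOT the real client call.
    def get_task_state(task_id: str) -> str:
        raise NotImplementedError

    def wait_for_tasks(task_ids, interval=1.0):
        """Poll every task until none of them is left in PENDING/STARTED."""
        pending = set(task_ids)
        while pending:
            for task_id in list(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
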
19:56:15 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state STARTED 2025-05-19 19:56:15.931325 | orchestrator | 2025-05-19 19:56:15 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:15.931331 | orchestrator | 2025-05-19 19:56:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:18.957794 | orchestrator | 2025-05-19 19:56:18 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:18.958319 | orchestrator | 2025-05-19 19:56:18 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:18.962656 | orchestrator | 2025-05-19 19:56:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:18.962734 | orchestrator | 2025-05-19 19:56:18 | INFO  | Task 4fc1a341-e6e9-4431-ac59-4bd8c6fa005d is in state SUCCESS 2025-05-19 19:56:18.962754 | orchestrator | 2025-05-19 19:56:18 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:18.962784 | orchestrator | 2025-05-19 19:56:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:56:18.964188 | orchestrator | 2025-05-19 19:56:18.964240 | orchestrator | 2025-05-19 19:56:18.964252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:56:18.964264 | orchestrator | 2025-05-19 19:56:18.964275 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:56:18.964286 | orchestrator | Monday 19 May 2025 19:54:53 +0000 (0:00:00.262) 0:00:00.262 ************ 2025-05-19 19:56:18.964298 | orchestrator | ok: [testbed-manager] 2025-05-19 19:56:18.964328 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:56:18.964375 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:56:18.964394 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:56:18.964405 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:56:18.964416 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:56:18.964427 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:56:18.964438 | orchestrator | 2025-05-19 19:56:18.964449 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:56:18.964490 | orchestrator | Monday 19 May 2025 19:54:54 +0000 (0:00:00.715) 0:00:00.978 ************ 2025-05-19 19:56:18.964511 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964528 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964546 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964563 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964579 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964596 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964614 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-05-19 19:56:18.964632 | orchestrator | 2025-05-19 19:56:18.964651 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-19 19:56:18.964668 | orchestrator | 2025-05-19 19:56:18.964687 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-05-19 19:56:18.964707 | orchestrator | Monday 19 May 2025 19:54:54 +0000 (0:00:00.820) 0:00:01.798 ************ 2025-05-19 19:56:18.964727 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:56:18.964747 | orchestrator | 2025-05-19 19:56:18.964765 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-05-19 19:56:18.964785 | orchestrator | Monday 19 May 2025 19:54:58 +0000 (0:00:03.120) 0:00:04.919 ************ 2025-05-19 19:56:18.964805 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-05-19 19:56:18.964825 | orchestrator | 2025-05-19 19:56:18.964838 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-19 19:56:18.964849 | orchestrator | Monday 19 May 2025 19:55:02 +0000 (0:00:04.148) 0:00:09.068 ************ 2025-05-19 19:56:18.964860 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-19 19:56:18.964873 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-19 19:56:18.964884 | orchestrator | 2025-05-19 19:56:18.964895 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-19 19:56:18.964906 | orchestrator | Monday 19 May 2025 19:55:08 +0000 (0:00:05.957) 0:00:15.026 ************ 2025-05-19 19:56:18.964916 | orchestrator | ok: [testbed-manager] => (item=service) 2025-05-19 19:56:18.964927 | orchestrator | 2025-05-19 19:56:18.964938 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-19 19:56:18.964948 | orchestrator | Monday 19 May 2025 19:55:10 +0000 (0:00:02.830) 0:00:17.857 ************ 2025-05-19 19:56:18.964959 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:56:18.964970 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-05-19 19:56:18.964980 | orchestrator | 2025-05-19 19:56:18.964991 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-19 19:56:18.965001 | orchestrator | Monday 19 May 2025 19:55:14 +0000 (0:00:03.644) 0:00:21.502 ************ 2025-05-19 19:56:18.965012 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-05-19 19:56:18.965023 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2025-05-19 19:56:18.965033 | orchestrator | 2025-05-19 19:56:18.965044 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-19 19:56:18.965055 | orchestrator | Monday 19 May 2025 19:55:20 +0000 (0:00:05.691) 0:00:27.193 ************ 2025-05-19 19:56:18.965065 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-05-19 19:56:18.965076 | orchestrator | 2025-05-19 19:56:18.965086 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:56:18.965097 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965121 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965132 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965144 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-05-19 19:56:18.965154 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965181 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965193 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 19:56:18.965204 | orchestrator | 2025-05-19 19:56:18.965215 | orchestrator | 2025-05-19 19:56:18.965225 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:56:18.965244 | orchestrator | Monday 19 May 2025 19:55:25 +0000 (0:00:04.888) 0:00:32.082 ************ 2025-05-19 19:56:18.965255 | orchestrator | =============================================================================== 2025-05-19 19:56:18.965266 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.96s 2025-05-19 19:56:18.965277 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.69s 2025-05-19 19:56:18.965288 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.89s 2025-05-19 19:56:18.965298 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.15s 2025-05-19 19:56:18.965309 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.64s 2025-05-19 19:56:18.965319 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.12s 2025-05-19 19:56:18.965330 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.83s 2025-05-19 19:56:18.965384 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-05-19 19:56:18.965396 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s 2025-05-19 19:56:18.965406 | orchestrator | 2025-05-19 19:56:18.965417 | orchestrator | 2025-05-19 19:56:18.965428 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:56:18.965439 | orchestrator | 2025-05-19 19:56:18.965450 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:56:18.965460 | orchestrator | Monday 19 May 2025 19:54:02 +0000 (0:00:00.413) 0:00:00.413 ************ 2025-05-19 19:56:18.965471 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:56:18.965482 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:56:18.965493 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:56:18.965503 | orchestrator | 2025-05-19 19:56:18.965514 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:56:18.965525 | orchestrator | Monday 19 May 2025 19:54:03 +0000 (0:00:00.527) 0:00:00.940 ************ 2025-05-19 19:56:18.965536 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-19 19:56:18.965547 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-19 19:56:18.965558 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-05-19 19:56:18.965569 | orchestrator | 2025-05-19 19:56:18.965580 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-05-19 19:56:18.965591 | orchestrator | 2025-05-19 19:56:18.965602 | orchestrator | TASK [magnum : include_tasks] 
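
The service-ks-register tasks recapped above for ceph-rgw, and repeated below for magnum, boil down to standard Keystone registration: create the service, its internal and public endpoints, a service user, the needed roles, and the role grant. A rough sketch of an equivalent sequence using the standard openstack CLI; the wrapper function and the placeholder password are illustrative assumptions, not what kolla-ansible actually executes:

    import subprocess

    def openstack(*args: str) -> None:
        """Run an openstack CLI command and stop on the first failure."""
        subprocess.run(["openstack", *args], check=True)

    # Roughly the ceph-rgw sequence shown in the recap above.
    openstack("service", "create", "--name", "swift", "object-store")
    openstack("endpoint", "create", "swift", "internal",
              "https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s")
    openstack("endpoint", "create", "swift", "public",
              "https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s")
    openstack("user", "create", "--project", "service",
              "--password", "CHANGE_ME", "ceph_rgw")  # placeholder password
    openstack("role", "create", "ResellerAdmin")
    openstack("role", "add", "--user", "ceph_rgw", "--project", "service", "admin")
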
************************************************** 2025-05-19 19:56:18.965613 | orchestrator | Monday 19 May 2025 19:54:03 +0000 (0:00:00.525) 0:00:01.465 ************ 2025-05-19 19:56:18.965623 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:56:18.965642 | orchestrator | 2025-05-19 19:56:18.965653 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-05-19 19:56:18.965664 | orchestrator | Monday 19 May 2025 19:54:04 +0000 (0:00:01.102) 0:00:02.568 ************ 2025-05-19 19:56:18.965675 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-05-19 19:56:18.965686 | orchestrator | 2025-05-19 19:56:18.965697 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-05-19 19:56:18.965707 | orchestrator | Monday 19 May 2025 19:54:08 +0000 (0:00:04.127) 0:00:06.695 ************ 2025-05-19 19:56:18.965718 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-05-19 19:56:18.965729 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-05-19 19:56:18.965740 | orchestrator | 2025-05-19 19:56:18.965757 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-05-19 19:56:18.965775 | orchestrator | Monday 19 May 2025 19:54:15 +0000 (0:00:06.788) 0:00:13.484 ************ 2025-05-19 19:56:18.965794 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:56:18.965812 | orchestrator | 2025-05-19 19:56:18.965830 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-05-19 19:56:18.965848 | orchestrator | Monday 19 May 2025 19:54:19 +0000 (0:00:03.436) 0:00:16.921 ************ 2025-05-19 19:56:18.965867 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:56:18.965886 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-05-19 19:56:18.965905 | orchestrator | 2025-05-19 19:56:18.965924 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-05-19 19:56:18.965942 | orchestrator | Monday 19 May 2025 19:54:23 +0000 (0:00:04.441) 0:00:21.362 ************ 2025-05-19 19:56:18.965959 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:56:18.965971 | orchestrator | 2025-05-19 19:56:18.965982 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-05-19 19:56:18.965993 | orchestrator | Monday 19 May 2025 19:54:27 +0000 (0:00:03.731) 0:00:25.094 ************ 2025-05-19 19:56:18.966003 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-05-19 19:56:18.966068 | orchestrator | 2025-05-19 19:56:18.966083 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-05-19 19:56:18.966094 | orchestrator | Monday 19 May 2025 19:54:31 +0000 (0:00:04.655) 0:00:29.749 ************ 2025-05-19 19:56:18.966105 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.966115 | orchestrator | 2025-05-19 19:56:18.966126 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-05-19 19:56:18.966147 | orchestrator | Monday 19 May 2025 19:54:35 +0000 (0:00:03.800) 0:00:33.549 ************ 2025-05-19 19:56:18.966158 
| orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.966169 | orchestrator | 2025-05-19 19:56:18.966180 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-19 19:56:18.966190 | orchestrator | Monday 19 May 2025 19:54:40 +0000 (0:00:05.094) 0:00:38.644 ************ 2025-05-19 19:56:18.966201 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.966211 | orchestrator | 2025-05-19 19:56:18.966230 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-19 19:56:18.966241 | orchestrator | Monday 19 May 2025 19:54:44 +0000 (0:00:03.980) 0:00:42.625 ************ 2025-05-19 19:56:18.966256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.966283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.966295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.966307 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.966486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.966540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.966564 | orchestrator | 2025-05-19 19:56:18.966594 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-19 19:56:18.966617 | orchestrator | Monday 19 May 2025 19:54:46 +0000 (0:00:01.789) 0:00:44.414 ************ 2025-05-19 19:56:18.966629 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.966640 | orchestrator | 2025-05-19 19:56:18.966651 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-19 19:56:18.966661 | orchestrator | Monday 19 May 2025 19:54:46 +0000 (0:00:00.294) 0:00:44.709 ************ 2025-05-19 19:56:18.966672 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.966683 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.966693 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.966704 | orchestrator | 2025-05-19 19:56:18.966715 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-19 19:56:18.966726 | orchestrator | Monday 19 May 2025 19:54:47 +0000 (0:00:00.940) 0:00:45.649 ************ 2025-05-19 19:56:18.966736 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:56:18.966747 | 
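
The long dictionaries echoed by "Ensuring config directories exist" and the later copy tasks are the per-container service definitions kolla-ansible loops over: image, bind mounts, an optional healthcheck, and HAProxy frontends. A condensed reading of the magnum-api entry, with values copied from the log and trimmed for brevity (purely illustrative, not a file the deployment writes):

    # Condensed copy of the magnum-api definition from the log (trimmed).
    magnum_api = {
        "container_name": "magnum_api",
        "image": "registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206",
        "volumes": [
            "/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        # Healthcheck: curl the API port on the node's internal address.
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
        },
        # Two HAProxy frontends on port 9511: internal (api-int) and external (api).
        "haproxy": {
            "magnum_api": {"enabled": "yes", "mode": "http", "external": False,
                           "port": "9511", "listen_port": "9511"},
            "magnum_api_external": {"enabled": "yes", "mode": "http",
                                    "external": True,
                                    "external_fqdn": "api.testbed.osism.xyz",
                                    "port": "9511", "listen_port": "9511"},
        },
    }

    # The config-directory task prepares /etc/kolla/<name> on each target host,
    # matching the first bind mount above.
    print(f"/etc/kolla/{magnum_api['container_name'].replace('_', '-')}")
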
orchestrator | 2025-05-19 19:56:18.966758 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-19 19:56:18.966768 | orchestrator | Monday 19 May 2025 19:54:49 +0000 (0:00:01.327) 0:00:46.977 ************ 2025-05-19 19:56:18.966780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.966792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.966805 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.966838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.966855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.966866 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.966876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.966886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.966896 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.966906 | orchestrator | 2025-05-19 19:56:18.966916 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-19 19:56:18.966925 | orchestrator | Monday 19 May 2025 19:54:50 +0000 (0:00:01.176) 0:00:48.153 ************ 2025-05-19 19:56:18.966935 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.966944 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.966954 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.966963 | orchestrator | 2025-05-19 19:56:18.966973 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-19 19:56:18.966982 | orchestrator | Monday 19 May 2025 19:54:50 +0000 (0:00:00.363) 0:00:48.517 ************ 2025-05-19 19:56:18.966992 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:56:18.967002 | orchestrator | 2025-05-19 19:56:18.967011 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-19 19:56:18.967027 | orchestrator | Monday 19 May 2025 19:54:51 +0000 (0:00:01.041) 0:00:49.559 ************ 2025-05-19 19:56:18.967057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967138 | orchestrator | 2025-05-19 19:56:18.967148 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-19 19:56:18.967158 | orchestrator | Monday 19 May 2025 19:54:54 +0000 (0:00:02.528) 0:00:52.087 ************ 2025-05-19 19:56:18.967168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967188 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
19:56:18.967199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967233 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.967247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967268 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.967277 | orchestrator | 2025-05-19 19:56:18.967287 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-19 19:56:18.967297 | orchestrator | Monday 19 May 2025 19:54:55 +0000 (0:00:00.758) 0:00:52.845 ************ 2025-05-19 19:56:18.967307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967355 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.967389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967425 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.967445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967475 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.967485 | orchestrator | 2025-05-19 19:56:18.967495 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-19 19:56:18.967504 | orchestrator | Monday 19 May 2025 19:54:57 +0000 (0:00:02.156) 0:00:55.002 ************ 2025-05-19 19:56:18.967514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967597 | orchestrator | 2025-05-19 19:56:18.967614 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-19 19:56:18.967624 | orchestrator | Monday 19 May 2025 19:55:01 +0000 (0:00:03.991) 0:00:58.993 ************ 2025-05-19 19:56:18.967639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967719 | orchestrator | 2025-05-19 19:56:18.967729 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-19 19:56:18.967739 | orchestrator | Monday 19 May 2025 19:55:13 +0000 (0:00:11.809) 0:01:10.803 ************ 2025-05-19 19:56:18.967749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967774 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.967785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967818 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.967828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 19:56:18.967838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 19:56:18.967858 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.967868 | orchestrator | 2025-05-19 19:56:18.967877 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-19 19:56:18.967887 | orchestrator | Monday 19 May 2025 19:55:14 +0000 (0:00:01.653) 0:01:12.456 ************ 2025-05-19 19:56:18.967897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 19:56:18.967940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 19:56:18.967976 | orchestrator | 2025-05-19 19:56:18.967986 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-19 19:56:18.967996 | orchestrator | Monday 19 May 2025 19:55:17 +0000 (0:00:02.867) 0:01:15.323 ************ 2025-05-19 19:56:18.968005 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:56:18.968015 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:56:18.968024 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:56:18.968034 | orchestrator | 
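Every container definition echoed above carries a 'healthcheck' block (interval, retries, start_period, timeout, plus a test command such as healthcheck_curl or healthcheck_port). As a rough illustration only, the Python sketch below approximates what those probes check; the real healthcheck_curl/healthcheck_port scripts ship inside the Kolla images, and healthcheck_port in particular inspects the named process's connections rather than merely opening a socket, so the helpers here are simplified stand-ins.

    import socket
    import urllib.request

    def check_http(url: str, timeout: float = 30.0) -> bool:
        # Simplified stand-in for "healthcheck_curl <url>" (e.g. http://192.168.16.10:9511).
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except OSError:
            return False

    def check_tcp(host: str, port: int, timeout: float = 30.0) -> bool:
        # Simplified stand-in for "healthcheck_port <service> <port>" (e.g. magnum-conductor 5672);
        # the real script verifies that the named process holds a connection to the port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Parameters mirroring the dict above: interval '30', retries '3', start_period '5', timeout '30'.
    if __name__ == "__main__":
        print(check_http("http://192.168.16.10:9511"))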
2025-05-19 19:56:18.968044 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-19 19:56:18.968053 | orchestrator | Monday 19 May 2025 19:55:17 +0000 (0:00:00.257) 0:01:15.581 ************ 2025-05-19 19:56:18.968062 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.968072 | orchestrator | 2025-05-19 19:56:18.968081 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-19 19:56:18.968091 | orchestrator | Monday 19 May 2025 19:55:20 +0000 (0:00:02.429) 0:01:18.010 ************ 2025-05-19 19:56:18.968100 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.968110 | orchestrator | 2025-05-19 19:56:18.968119 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-19 19:56:18.968129 | orchestrator | Monday 19 May 2025 19:55:22 +0000 (0:00:02.530) 0:01:20.540 ************ 2025-05-19 19:56:18.968138 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.968148 | orchestrator | 2025-05-19 19:56:18.968164 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 19:56:18.968173 | orchestrator | Monday 19 May 2025 19:55:42 +0000 (0:00:19.330) 0:01:39.871 ************ 2025-05-19 19:56:18.968183 | orchestrator | 2025-05-19 19:56:18.968193 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 19:56:18.968207 | orchestrator | Monday 19 May 2025 19:55:42 +0000 (0:00:00.083) 0:01:39.954 ************ 2025-05-19 19:56:18.968217 | orchestrator | 2025-05-19 19:56:18.968226 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-19 19:56:18.968236 | orchestrator | Monday 19 May 2025 19:55:42 +0000 (0:00:00.212) 0:01:40.167 ************ 2025-05-19 19:56:18.968245 | orchestrator | 2025-05-19 19:56:18.968255 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-19 19:56:18.968264 | orchestrator | Monday 19 May 2025 19:55:42 +0000 (0:00:00.061) 0:01:40.229 ************ 2025-05-19 19:56:18.968273 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.968283 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:56:18.968292 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:56:18.968302 | orchestrator | 2025-05-19 19:56:18.968311 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-19 19:56:18.968327 | orchestrator | Monday 19 May 2025 19:56:01 +0000 (0:00:19.533) 0:01:59.762 ************ 2025-05-19 19:56:18.968361 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:56:18.968371 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:56:18.968381 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:56:18.968391 | orchestrator | 2025-05-19 19:56:18.968400 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:56:18.968411 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 19:56:18.968421 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:56:18.968431 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:56:18.968441 | orchestrator | 2025-05-19 19:56:18.968450 | orchestrator | 2025-05-19 
19:56:18.968460 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:56:18.968469 | orchestrator | Monday 19 May 2025 19:56:18 +0000 (0:00:16.270) 0:02:16.032 ************ 2025-05-19 19:56:18.968479 | orchestrator | =============================================================================== 2025-05-19 19:56:18.968489 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.53s 2025-05-19 19:56:18.968499 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 19.33s 2025-05-19 19:56:18.968508 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.27s 2025-05-19 19:56:18.968518 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.81s 2025-05-19 19:56:18.968527 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.79s 2025-05-19 19:56:18.968537 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 5.09s 2025-05-19 19:56:18.968546 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.66s 2025-05-19 19:56:18.968556 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.44s 2025-05-19 19:56:18.968566 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.13s 2025-05-19 19:56:18.968575 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.99s 2025-05-19 19:56:18.968585 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.98s 2025-05-19 19:56:18.968594 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.80s 2025-05-19 19:56:18.968604 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.73s 2025-05-19 19:56:18.968613 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.44s 2025-05-19 19:56:18.968623 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.87s 2025-05-19 19:56:18.968632 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.53s 2025-05-19 19:56:18.968642 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.53s 2025-05-19 19:56:18.968651 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.43s 2025-05-19 19:56:18.968661 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.16s 2025-05-19 19:56:18.968670 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.79s 2025-05-19 19:56:21.992520 | orchestrator | 2025-05-19 19:56:21 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:56:21.993804 | orchestrator | 2025-05-19 19:56:21 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:56:21.994185 | orchestrator | 2025-05-19 19:56:21 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:56:21.994661 | orchestrator | 2025-05-19 19:56:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:56:21.995253 | orchestrator | 2025-05-19 19:56:21 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:56:21.995276 | 
orchestrator | 2025-05-19 19:56:21 | INFO  | Wait 1 second(s) until the next check [identical status checks repeated roughly every 3 seconds from 19:56:25 through 19:57:04, with all five tasks still in state STARTED; repeated lines elided]
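The INFO lines above and below come from the deployment tooling polling its task backend about every three seconds until each task leaves the STARTED state. A minimal sketch of such a loop, assuming a hypothetical get_state(task_id) callable in place of whatever the osism client actually queries (for example a Celery result backend):

    import time

    def wait_for_tasks(task_ids, get_state, delay=1.0):
        # Poll every task until it reaches a terminal state, mirroring the
        # "Task <uuid> is in state STARTED" / "Wait 1 second(s)" output above.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(delay)} second(s) until the next check")
                time.sleep(delay)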
2025-05-19 19:57:04.666910 | orchestrator | 2025-05-19 19:57:04 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state STARTED 2025-05-19 19:57:04.666997 | orchestrator | 2025-05-19 19:57:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:04.667546 | orchestrator | 2025-05-19 19:57:04 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:04.667578 | orchestrator | 2025-05-19 19:57:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:07.693970 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:07.696237 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:07.696353 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:07.698813 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task bf9ac193-9a02-4215-b9f2-46115d0778b0 is in state SUCCESS 2025-05-19 19:57:07.700351 | orchestrator | 2025-05-19 19:57:07.700395 | orchestrator | 2025-05-19 19:57:07.700408 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:57:07.700523 | orchestrator | 2025-05-19 19:57:07.700594 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:57:07.700610 | orchestrator | Monday 19 May 2025 19:51:45 +0000 (0:00:00.376) 0:00:00.376 ************ 2025-05-19 19:57:07.700621 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:57:07.700666 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:57:07.700679 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:57:07.700981 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:57:07.700999 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:57:07.701011 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:57:07.701023 | orchestrator | 2025-05-19 19:57:07.701034 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:57:07.701045 | orchestrator | Monday 19 May 2025 19:51:46 +0000 (0:00:00.751) 0:00:01.128 ************ 2025-05-19 19:57:07.701056 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-19 19:57:07.701067 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-19 19:57:07.701078 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-19 19:57:07.701089 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-19 19:57:07.701100 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-19 19:57:07.701110 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-19 19:57:07.701121 | orchestrator | 2025-05-19 19:57:07.701248 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-19 19:57:07.701291 | orchestrator | 2025-05-19 19:57:07.701303 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 19:57:07.701314 | orchestrator | Monday 19 May 2025 19:51:47 +0000 (0:00:00.697) 0:00:01.825 ************ 2025-05-19 19:57:07.701327 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:57:07.701346 | orchestrator | 2025-05-19 19:57:07.701363 | orchestrator | TASK 
[neutron : Get container facts] ******************************************* 2025-05-19 19:57:07.701381 | orchestrator | Monday 19 May 2025 19:51:48 +0000 (0:00:01.201) 0:00:03.026 ************ 2025-05-19 19:57:07.701399 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:57:07.701417 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:57:07.701437 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:57:07.701455 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:57:07.701472 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:57:07.701483 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:57:07.701494 | orchestrator | 2025-05-19 19:57:07.701890 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-19 19:57:07.701906 | orchestrator | Monday 19 May 2025 19:51:49 +0000 (0:00:01.199) 0:00:04.226 ************ 2025-05-19 19:57:07.701925 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:57:07.701945 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:57:07.701963 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:57:07.702006 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:57:07.702070 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:57:07.702084 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:57:07.702095 | orchestrator | 2025-05-19 19:57:07.702106 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-19 19:57:07.702117 | orchestrator | Monday 19 May 2025 19:51:50 +0000 (0:00:01.062) 0:00:05.288 ************ 2025-05-19 19:57:07.702792 | orchestrator | ok: [testbed-node-0] => { 2025-05-19 19:57:07.702810 | orchestrator |  "changed": false, 2025-05-19 19:57:07.702835 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.702942 | orchestrator | } 2025-05-19 19:57:07.703670 | orchestrator | ok: [testbed-node-1] => { 2025-05-19 19:57:07.703703 | orchestrator |  "changed": false, 2025-05-19 19:57:07.703722 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.703742 | orchestrator | } 2025-05-19 19:57:07.703759 | orchestrator | ok: [testbed-node-2] => { 2025-05-19 19:57:07.703774 | orchestrator |  "changed": false, 2025-05-19 19:57:07.703807 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.703823 | orchestrator | } 2025-05-19 19:57:07.703840 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 19:57:07.703855 | orchestrator |  "changed": false, 2025-05-19 19:57:07.703871 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.703881 | orchestrator | } 2025-05-19 19:57:07.703903 | orchestrator | ok: [testbed-node-4] => { 2025-05-19 19:57:07.703912 | orchestrator |  "changed": false, 2025-05-19 19:57:07.703922 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.703932 | orchestrator | } 2025-05-19 19:57:07.703941 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 19:57:07.703951 | orchestrator |  "changed": false, 2025-05-19 19:57:07.703960 | orchestrator |  "msg": "All assertions passed" 2025-05-19 19:57:07.703970 | orchestrator | } 2025-05-19 19:57:07.703980 | orchestrator | 2025-05-19 19:57:07.704028 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-05-19 19:57:07.704041 | orchestrator | Monday 19 May 2025 19:51:51 +0000 (0:00:00.618) 0:00:05.906 ************ 2025-05-19 19:57:07.704051 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.704061 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.704315 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 19:57:07.704337 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.704355 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.704373 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.704389 | orchestrator | 2025-05-19 19:57:07.704405 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-19 19:57:07.704422 | orchestrator | Monday 19 May 2025 19:51:52 +0000 (0:00:00.679) 0:00:06.585 ************ 2025-05-19 19:57:07.704438 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-19 19:57:07.704455 | orchestrator | 2025-05-19 19:57:07.704465 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-19 19:57:07.704475 | orchestrator | Monday 19 May 2025 19:51:55 +0000 (0:00:03.656) 0:00:10.242 ************ 2025-05-19 19:57:07.704485 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-19 19:57:07.704496 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-19 19:57:07.704506 | orchestrator | 2025-05-19 19:57:07.704646 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-19 19:57:07.704664 | orchestrator | Monday 19 May 2025 19:52:02 +0000 (0:00:06.664) 0:00:16.907 ************ 2025-05-19 19:57:07.704675 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:57:07.704982 | orchestrator | 2025-05-19 19:57:07.704994 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-19 19:57:07.705004 | orchestrator | Monday 19 May 2025 19:52:06 +0000 (0:00:03.608) 0:00:20.515 ************ 2025-05-19 19:57:07.705014 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:57:07.705040 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-19 19:57:07.705050 | orchestrator | 2025-05-19 19:57:07.705059 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-19 19:57:07.705069 | orchestrator | Monday 19 May 2025 19:52:09 +0000 (0:00:03.919) 0:00:24.435 ************ 2025-05-19 19:57:07.705079 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:57:07.705088 | orchestrator | 2025-05-19 19:57:07.705098 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-19 19:57:07.705108 | orchestrator | Monday 19 May 2025 19:52:12 +0000 (0:00:03.008) 0:00:27.443 ************ 2025-05-19 19:57:07.705118 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-19 19:57:07.705127 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-19 19:57:07.705137 | orchestrator | 2025-05-19 19:57:07.705146 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 19:57:07.705156 | orchestrator | Monday 19 May 2025 19:52:20 +0000 (0:00:07.999) 0:00:35.443 ************ 2025-05-19 19:57:07.705166 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.705175 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.705185 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.705194 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.705204 | orchestrator | skipping: [testbed-node-4] 
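The service-ks-register steps above (creating the neutron service, its internal and public endpoints, the service project, the neutron user, and the admin/service role grants) correspond to standard Keystone operations. A hedged openstacksdk sketch of the same sequence, where the cloud name and region are assumptions, the password is deliberately left out, and the real role is an idempotent Ansible module, so this is illustrative only:

    import openstack

    # "testbed" is an assumed clouds.yaml entry; "RegionOne" is likewise an assumption.
    conn = openstack.connect(cloud="testbed")

    service = conn.identity.create_service(name="neutron", type="network")
    for interface, url in (
        ("internal", "https://api-int.testbed.osism.xyz:9696"),
        ("public", "https://api.testbed.osism.xyz:9696"),
    ):
        conn.identity.create_endpoint(
            service_id=service.id, interface=interface, url=url, region_id="RegionOne"
        )

    project = conn.identity.find_project("service")
    user = conn.identity.create_user(
        name="neutron", password="...", default_project_id=project.id  # password not shown in the log
    )
    for role_name in ("admin", "service"):
        role = conn.identity.find_role(role_name)
        conn.identity.assign_project_role_to_user(project.id, user.id, role.id)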
2025-05-19 19:57:07.705213 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.705223 | orchestrator | 2025-05-19 19:57:07.705232 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-19 19:57:07.705242 | orchestrator | Monday 19 May 2025 19:52:21 +0000 (0:00:00.780) 0:00:36.223 ************ 2025-05-19 19:57:07.705252 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.705390 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.705404 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.705969 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.705983 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.705991 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.705999 | orchestrator | 2025-05-19 19:57:07.706007 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-19 19:57:07.706040 | orchestrator | Monday 19 May 2025 19:52:25 +0000 (0:00:04.085) 0:00:40.309 ************ 2025-05-19 19:57:07.706050 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:57:07.706058 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:57:07.706066 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:57:07.706074 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:57:07.706082 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:57:07.706089 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:57:07.706097 | orchestrator | 2025-05-19 19:57:07.706105 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-19 19:57:07.706113 | orchestrator | Monday 19 May 2025 19:52:28 +0000 (0:00:02.203) 0:00:42.513 ************ 2025-05-19 19:57:07.706121 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.706139 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.706147 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.706155 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.706163 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.706171 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.706178 | orchestrator | 2025-05-19 19:57:07.706186 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-19 19:57:07.706194 | orchestrator | Monday 19 May 2025 19:52:32 +0000 (0:00:04.376) 0:00:46.889 ************ 2025-05-19 19:57:07.706206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.706345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.706398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.706478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.706491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.706513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.706616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.706633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.706646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.706718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.706964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.706978 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.706987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.707020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.707082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707094 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.707103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.707135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.707341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.707667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.707767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.707824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.707903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.707929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.707938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.708528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.708679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.708690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.708699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.708778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.708785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.708877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708890 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.708956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.708964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.708977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.708993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.709074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.709092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.709099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.709156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.709176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.709187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.709243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.709253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.709316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.709328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.709335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 
'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.709342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.710231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.710401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.710433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-05-19 19:57:07.710472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.710488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.710519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.710543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.710555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.710573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': 
False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.710586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.710598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.710609 | orchestrator | 2025-05-19 19:57:07.710622 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-05-19 19:57:07.710634 | orchestrator | Monday 19 May 2025 19:52:36 +0000 (0:00:03.736) 0:00:50.625 ************ 2025-05-19 19:57:07.710646 | orchestrator | [WARNING]: Skipped 2025-05-19 19:57:07.710658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-05-19 19:57:07.710677 | orchestrator | due to this access issue: 2025-05-19 19:57:07.710700 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-05-19 19:57:07.710711 | orchestrator | a directory 2025-05-19 19:57:07.710723 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:57:07.710734 | orchestrator | 2025-05-19 19:57:07.710745 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 19:57:07.710781 | orchestrator | Monday 19 May 2025 19:52:36 +0000 (0:00:00.732) 0:00:51.357 ************ 2025-05-19 19:57:07.710793 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:57:07.710806 | 
orchestrator | 2025-05-19 19:57:07.710817 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-05-19 19:57:07.710829 | orchestrator | Monday 19 May 2025 19:52:39 +0000 (0:00:02.415) 0:00:53.773 ************ 2025-05-19 19:57:07.710840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.710853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.710869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.710882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.710908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.710936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.710947 | orchestrator | 2025-05-19 19:57:07.710969 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-19 19:57:07.710980 | orchestrator | Monday 19 May 2025 19:52:46 +0000 (0:00:07.295) 0:01:01.068 ************ 2025-05-19 19:57:07.710998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711010 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.711021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711039 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.711058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711070 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.711082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711093 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.711104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711115 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.711131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711143 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.711154 | orchestrator | 2025-05-19 19:57:07.711165 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-19 19:57:07.711176 | orchestrator | Monday 19 May 2025 19:52:50 +0000 (0:00:03.822) 0:01:04.891 ************ 2025-05-19 19:57:07.711188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711205 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.711224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711236 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.711247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711258 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.711308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.711328 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.711346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711376 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.711395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711416 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.711434 | orchestrator | 2025-05-19 19:57:07.711475 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-19 19:57:07.711488 | orchestrator | Monday 19 May 2025 19:52:53 +0000 (0:00:03.527) 0:01:08.418 ************ 2025-05-19 19:57:07.711499 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.711509 | orchestrator | 
skipping: [testbed-node-0]
2025-05-19 19:57:07.711520 | orchestrator | skipping: [testbed-node-3]
2025-05-19 19:57:07.711530 | orchestrator | skipping: [testbed-node-5]
2025-05-19 19:57:07.711541 | orchestrator | skipping: [testbed-node-2]
2025-05-19 19:57:07.711552 | orchestrator | skipping: [testbed-node-4]
2025-05-19 19:57:07.711563 | orchestrator |
2025-05-19 19:57:07.711573 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-05-19 19:57:07.711584 | orchestrator | Monday 19 May 2025 19:52:58 +0000 (0:00:00.100) 0:01:12.602 ************
2025-05-19 19:57:07.711595 | orchestrator | skipping: [testbed-node-0]
2025-05-19 19:57:07.711606 | orchestrator |
2025-05-19 19:57:07.711617 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-05-19 19:57:07.711628 | orchestrator | Monday 19 May 2025 19:52:58 +0000 (0:00:00.555) 0:01:12.703 ************
2025-05-19 19:57:07.711638 | orchestrator | skipping: [testbed-node-0]
2025-05-19 19:57:07.711649 | orchestrator | skipping: [testbed-node-1]
2025-05-19 19:57:07.711660 | orchestrator | skipping: [testbed-node-2]
2025-05-19 19:57:07.711670 | orchestrator | skipping: [testbed-node-3]
2025-05-19 19:57:07.711681 | orchestrator | skipping: [testbed-node-4]
2025-05-19 19:57:07.711691 | orchestrator | skipping: [testbed-node-5]
2025-05-19 19:57:07.711702 | orchestrator |
2025-05-19 19:57:07.711713 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-05-19 19:57:07.711724 | orchestrator | Monday 19 May 2025 19:52:58 +0000 (0:00:00.555) 0:01:13.258 ************
2025-05-19 19:57:07.711735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 19:57:07.711767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-19 19:57:07.711780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent',
'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.711823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.711860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': 
{'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.711872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.711925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.711948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.711960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.711973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.711992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712003 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.712015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.712038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.712112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.712337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.712393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712424 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.712435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.712454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.712512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.712613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.712665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712688 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.712707 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.712726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.712785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.712891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.712909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.712937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.712949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.712960 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.712978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.712997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713008 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.713048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.713084 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.713096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.713123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.713147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.713171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.713195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.713211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713222 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.713234 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.713257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.713377 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.713405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.714185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.714199 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-19 19:57:07.714308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-19 19:57:07.714318 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-19 19:57:07.714326 | orchestrator | skipping: [testbed-node-5]
2025-05-19 19:57:07.714356 | orchestrator |
2025-05-19 19:57:07.714364 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-05-19 19:57:07.714371 | orchestrator | Monday 19 May 2025 19:53:02 +0000 (0:00:03.767) 0:01:17.026 ************
2025-05-19 19:57:07.714378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 19:57:07.714395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.714425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.714467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 
19:57:07.714487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.714511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.714518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.714539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714584 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.714593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.714635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.714672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.714716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.714735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.714762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.714768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.714785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.714899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.714940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 
'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.714976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.714983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.714996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.715013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.715054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.715071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.715078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.715094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.715111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.715118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.715141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.715148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.715164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.715182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.715192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.715199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.718088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.718150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.718171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.718207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.718230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.718252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.718293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718307 | orchestrator | 2025-05-19 19:57:07.718314 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-19 19:57:07.718321 | orchestrator | Monday 19 May 2025 19:53:07 +0000 (0:00:04.498) 0:01:21.524 ************ 2025-05-19 19:57:07.718338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.718345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.718423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718462 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.718507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.718584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.718675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.718704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.718734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.718823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.718856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.718886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.718916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718951 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.718962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.718975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.718989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.718996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.719003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.719014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.719027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.719047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.719071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.719078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.719097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.719114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719121 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.719141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.719148 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.719165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.719182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.719209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.719216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719223 | orchestrator | 2025-05-19 19:57:07.719229 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-19 19:57:07.719239 | orchestrator | Monday 19 May 2025 19:53:15 +0000 (0:00:08.376) 0:01:29.900 ************ 2025-05-19 19:57:07.719246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.719258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.719313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.719355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.719373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.719380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.719389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.719396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.721645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721780 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.721799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.721814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.721902 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.721928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.721946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.721969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.721994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.722007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.722110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.722154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722168 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.722209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.722312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.722424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.722465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.722512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.722595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.722695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722719 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.722736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722766 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.722778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.722796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.722854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': 
{'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.722921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.722951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.722970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.722982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.722993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.723046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.723085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.723200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-19 19:57:07.723241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-19 19:57:07.723253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-19 19:57:07.723330 | orchestrator |
2025-05-19 19:57:07.723345 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-05-19 19:57:07.723357 | orchestrator | Monday 19 May 2025 19:53:20 +0000 (0:00:05.186) 0:01:35.087 ************
2025-05-19 19:57:07.723373 | orchestrator | changed: [testbed-node-0]
2025-05-19 19:57:07.723385 | orchestrator | skipping: [testbed-node-4]
2025-05-19 19:57:07.723395 | orchestrator | skipping: [testbed-node-3]
2025-05-19 19:57:07.723406 | orchestrator | skipping: [testbed-node-5]
2025-05-19 19:57:07.723416 | orchestrator | changed: [testbed-node-1]
2025-05-19 19:57:07.723427 | orchestrator | changed: [testbed-node-2]
2025-05-19 19:57:07.723438 | orchestrator |
2025-05-19 19:57:07.723449 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-05-19 19:57:07.723460 | orchestrator | Monday 19 May 2025 19:53:26 +0000 (0:00:06.353) 0:01:41.441 ************
2025-05-19 19:57:07.723471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.723492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.723550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': 
{'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723639 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.723651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.723695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723725 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.723741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.723753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.723817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.723920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.723932 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.723962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.723980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.723991 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.724007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.724019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.724077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.724155 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.724182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.724410 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.724440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.724469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.724530 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724542 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.724604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.724627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724643 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.724672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.724690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724701 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.724713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.724729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.724829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.724893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.724957 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.724978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.724999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.725020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.725072 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.725115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.725153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.725258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.725332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.725357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.725408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.725456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.725467 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.725497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.725516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.725527 | orchestrator | 2025-05-19 19:57:07.725539 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-19 19:57:07.725554 | orchestrator | Monday 19 May 2025 19:53:35 +0000 (0:00:08.119) 0:01:49.561 ************ 2025-05-19 19:57:07.725573 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.725597 | orchestrator | skipping: [testbed-node-1] 
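The per-item results above are a loop over the neutron role's service dictionary (neutron_services in kolla-ansible): each value records the container name and image, the bind mounts, whether the service is enabled, whether the current host belongs to the service's group, and an optional container healthcheck. The following is a minimal Python sketch that summarizes one entry exactly as printed above (the neutron-ovn-metadata-agent item, abridged); it only restates fields shown in the log and is not kolla-ansible's own code.

# Sketch only: summarize one entry of the service map printed in the loop results
# above. The dict below is copied (abridged) from the testbed-node-4 output for
# neutron-ovn-metadata-agent; volumes/dimensions/privileged are omitted for brevity.
from typing import Any

service: dict[str, Any] = {
    "key": "neutron-ovn-metadata-agent",
    "value": {
        "container_name": "neutron_ovn_metadata_agent",
        "image": "registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206",
        "enabled": True,
        "host_in_groups": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_port neutron-ovn-metadata-agent 6640"],
            "timeout": "30",
        },
    },
}

def summarize(item: dict[str, Any]) -> str:
    """Return a one-service summary of the fields the task output is keyed on."""
    value = item["value"]
    check = value.get("healthcheck", {})
    lines = [
        f"service:        {item['key']}",
        f"container:      {value['container_name']}",
        f"image:          {value['image']}",
        f"enabled:        {value.get('enabled', False)}",
        f"host in groups: {value.get('host_in_groups', False)}",
    ]
    if check:
        # 'test' is a Docker-style CMD-SHELL healthcheck; print the command and the
        # raw interval/timeout/retries values as they appear in the service map.
        lines.append(
            f"healthcheck:    {' '.join(check['test'][1:])} "
            f"(interval {check['interval']}, timeout {check['timeout']}, retries {check['retries']})"
        )
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize(service))

Running the sketch prints the same container and healthcheck facts that the loop labels spell out, which is easier to scan than the raw dict reprs in the log lines above.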
2025-05-19 19:57:07.725615 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.725631 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.725648 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.725665 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.725682 | orchestrator | 2025-05-19 19:57:07.725700 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-19 19:57:07.725718 | orchestrator | Monday 19 May 2025 19:53:38 +0000 (0:00:03.498) 0:01:53.059 ************ 2025-05-19 19:57:07.725738 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.725756 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.725775 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.725786 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.725796 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.725807 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.725817 | orchestrator | 2025-05-19 19:57:07.725828 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-19 19:57:07.725839 | orchestrator | Monday 19 May 2025 19:53:42 +0000 (0:00:03.667) 0:01:56.726 ************ 2025-05-19 19:57:07.725850 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.725860 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.725871 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.725882 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.725906 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.725917 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.725927 | orchestrator | 2025-05-19 19:57:07.725938 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-19 19:57:07.725949 | orchestrator | Monday 19 May 2025 19:53:44 +0000 (0:00:02.125) 0:01:58.852 ************ 2025-05-19 19:57:07.725960 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.725970 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.725980 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.725991 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.726002 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.726050 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.726074 | orchestrator | 2025-05-19 19:57:07.726091 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-19 19:57:07.726137 | orchestrator | Monday 19 May 2025 19:53:47 +0000 (0:00:02.978) 0:02:01.830 ************ 2025-05-19 19:57:07.726153 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.726170 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.726188 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.726206 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.726223 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.726241 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.726258 | orchestrator | 2025-05-19 19:57:07.726312 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-19 19:57:07.726331 | orchestrator | Monday 19 May 2025 19:53:50 +0000 (0:00:03.391) 0:02:05.221 ************ 2025-05-19 19:57:07.726350 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.726368 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:57:07.726386 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.726397 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.726408 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.726419 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.726429 | orchestrator | 2025-05-19 19:57:07.726442 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-19 19:57:07.726461 | orchestrator | Monday 19 May 2025 19:53:54 +0000 (0:00:04.059) 0:02:09.281 ************ 2025-05-19 19:57:07.726486 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726507 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.726524 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726541 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.726560 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726578 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.726597 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726616 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.726629 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726640 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.726651 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 19:57:07.726661 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.726672 | orchestrator | 2025-05-19 19:57:07.726682 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-19 19:57:07.726693 | orchestrator | Monday 19 May 2025 19:53:58 +0000 (0:00:03.307) 0:02:12.588 ************ 2025-05-19 19:57:07.726730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.726745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.726808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.726839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.726857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.726886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.726897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.726965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.726985 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.727033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.727120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.727159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.727237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727256 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.727364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.727387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.727474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.727494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727522 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.727545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.727564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.727674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727694 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.727789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.727825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.727845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.727904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.727921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.727938 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.727962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.727980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.728042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 
19:57:07.728095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.728108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.728154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.728229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.728246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728257 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.728286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.728303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.728358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-19 19:57:07.728373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.728429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.728455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.728471 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 
'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.728552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.728568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.728589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728629 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.728640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 
19:57:07.728650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.728922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.728954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.728971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.728989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 
'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.729048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.729160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729189 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.729209 | orchestrator | 2025-05-19 19:57:07.729225 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-19 19:57:07.729242 | orchestrator | Monday 19 May 2025 19:54:01 +0000 (0:00:02.987) 0:02:15.575 ************ 2025-05-19 19:57:07.729258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.729331 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.729524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.729570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.729596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.729716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.729744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.729762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.729810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.729954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.729974 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.729985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.730006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.730256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.730396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.730501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.730550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.730561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730571 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.730646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-19 19:57:07.730669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.730779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.730847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.730932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.730957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.730968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.730983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.730995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.731097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.731122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731227 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.731237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.731353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.731466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.731490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731615 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.731630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.731641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731678 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.731750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.731770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.731799 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.731808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.731872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.731893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.731906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731914 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.731923 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.731960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731978 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.731987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.732002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.732016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.732025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.732059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.732068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.732076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.732089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.732103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.732112 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.732143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.732153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.732162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.732170 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732178 | orchestrator | 2025-05-19 19:57:07.732186 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-19 19:57:07.732194 | orchestrator | Monday 19 May 2025 19:54:03 +0000 (0:00:02.627) 0:02:18.203 ************ 2025-05-19 19:57:07.732202 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732210 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732224 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732232 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732240 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732251 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732259 | orchestrator | 2025-05-19 19:57:07.732287 | orchestrator | TASK [neutron : Copying over 
neutron_ovn_metadata_agent.ini] ******************* 2025-05-19 19:57:07.732295 | orchestrator | Monday 19 May 2025 19:54:06 +0000 (0:00:03.127) 0:02:21.331 ************ 2025-05-19 19:57:07.732303 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732310 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732318 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732326 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:57:07.732333 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:57:07.732341 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:57:07.732349 | orchestrator | 2025-05-19 19:57:07.732357 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-19 19:57:07.732364 | orchestrator | Monday 19 May 2025 19:54:12 +0000 (0:00:05.406) 0:02:26.737 ************ 2025-05-19 19:57:07.732372 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732380 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732387 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732395 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732403 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732411 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732426 | orchestrator | 2025-05-19 19:57:07.732438 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-19 19:57:07.732458 | orchestrator | Monday 19 May 2025 19:54:14 +0000 (0:00:01.990) 0:02:28.728 ************ 2025-05-19 19:57:07.732473 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732486 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732499 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732511 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732523 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732536 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732550 | orchestrator | 2025-05-19 19:57:07.732564 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-19 19:57:07.732575 | orchestrator | Monday 19 May 2025 19:54:16 +0000 (0:00:02.325) 0:02:31.054 ************ 2025-05-19 19:57:07.732588 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732602 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732614 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732627 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732641 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732654 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732668 | orchestrator | 2025-05-19 19:57:07.732681 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-19 19:57:07.732691 | orchestrator | Monday 19 May 2025 19:54:18 +0000 (0:00:02.398) 0:02:33.453 ************ 2025-05-19 19:57:07.732744 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732761 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732775 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732788 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732802 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732815 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732828 | orchestrator | 2025-05-19 19:57:07.732842 | orchestrator | TASK [neutron : Copying 
over ovn_agent.ini] ************************************ 2025-05-19 19:57:07.732857 | orchestrator | Monday 19 May 2025 19:54:22 +0000 (0:00:03.105) 0:02:36.558 ************ 2025-05-19 19:57:07.732871 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.732884 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.732898 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.732911 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.732934 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.732947 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.732959 | orchestrator | 2025-05-19 19:57:07.732972 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-19 19:57:07.732986 | orchestrator | Monday 19 May 2025 19:54:25 +0000 (0:00:03.562) 0:02:40.121 ************ 2025-05-19 19:57:07.732996 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.733004 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.733012 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.733019 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.733027 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.733035 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.733042 | orchestrator | 2025-05-19 19:57:07.733050 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-19 19:57:07.733058 | orchestrator | Monday 19 May 2025 19:54:31 +0000 (0:00:05.996) 0:02:46.117 ************ 2025-05-19 19:57:07.733066 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.733073 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.733081 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.733089 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.733096 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.733104 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.733112 | orchestrator | 2025-05-19 19:57:07.733120 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-19 19:57:07.733128 | orchestrator | Monday 19 May 2025 19:54:35 +0000 (0:00:03.562) 0:02:49.680 ************ 2025-05-19 19:57:07.733136 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.733143 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.733151 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.733159 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.733166 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.733174 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.733182 | orchestrator | 2025-05-19 19:57:07.733189 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-19 19:57:07.733201 | orchestrator | Monday 19 May 2025 19:54:38 +0000 (0:00:03.615) 0:02:53.295 ************ 2025-05-19 19:57:07.733214 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733229 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.733241 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733256 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.733287 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733308 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.733316 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733324 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.733332 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733340 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.733348 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 19:57:07.733356 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.733363 | orchestrator | 2025-05-19 19:57:07.733371 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-19 19:57:07.733379 | orchestrator | Monday 19 May 2025 19:54:41 +0000 (0:00:03.075) 0:02:56.370 ************ 2025-05-19 19:57:07.733388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.733434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.733475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.733551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.733573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.733623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.733635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.733679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.733721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733734 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.733743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.733799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.733825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.733833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.733874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.733882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733890 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.733903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.733916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.733948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.733985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.733994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.734121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.734166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-05-19 19:57:07.734377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.734416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.734487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.734495 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.734559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734576 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.734685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  
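Each item echoed in these loop results is one kolla-ansible Neutron service definition, and the flags it carries ('enabled', 'host_in_groups') are the main inputs to the conditions that decide whether a host acts on the item or reports "skipping:" (the actual role conditionals include further checks as well, e.g. on the state of the existing container). As a rough, hypothetical Python sketch of just the flag-handling part, and not the real kolla-ansible code: the definitions mix real booleans with string booleans such as 'no' on neutron-tls-proxy, so both forms need to be normalised. The service names and flag values below are copied from items visible in this log purely for illustration.

# Hypothetical sketch: evaluate the 'enabled' / 'host_in_groups' flags carried
# by the service definitions echoed in the loop items above.
services = {
    "neutron-server": {"enabled": True, "host_in_groups": True},
    "neutron-ovn-metadata-agent": {"enabled": True, "host_in_groups": False},
    "neutron-bgp-dragent": {"enabled": False, "host_in_groups": True},
    "neutron-tls-proxy": {"enabled": "no", "host_in_groups": True},
}

def should_handle(service):
    # Normalise string booleans such as 'no' / 'yes' before combining the flags.
    enabled = service.get("enabled", False)
    if isinstance(enabled, str):
        enabled = enabled.lower() in ("yes", "true", "1")
    return bool(enabled) and bool(service.get("host_in_groups", False))

for name, service in services.items():
    print(name, "->", "handle" if should_handle(service) else "skip")

This covers only the flag evaluation; the real task conditionals in kolla-ansible add further checks, which is why some items with both flags set to True can still show up as "skipping:" in this output.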
2025-05-19 19:57:07.734711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.734755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734800 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.734825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.734846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.734872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.734897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734905 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.734917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.734928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.734981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.734988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.734995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735013 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.735060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.735085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735124 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.735130 | orchestrator | 2025-05-19 19:57:07.735137 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-19 19:57:07.735144 | orchestrator | Monday 19 May 2025 19:54:45 +0000 (0:00:03.354) 0:02:59.726 ************ 2025-05-19 19:57:07.735151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.735159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.735362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.735388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.735419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.735476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735541 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.735548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.735689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.735714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 19:57:07.735746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.735845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.735870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.735882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 19:57:07.735899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.735936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 19:57:07.735947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.735961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.735968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.735991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.735998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.736017 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.736024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.736040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736047 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.736071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.736078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736094 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.736101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.736108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 19:57:07.736126 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:57:07.736151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 19:57:07.736158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 19:57:07.736176 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 19:57:07.736183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 19:57:07.736195 | orchestrator | 2025-05-19 19:57:07.736207 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 19:57:07.736219 | orchestrator | Monday 19 May 2025 19:54:50 +0000 (0:00:04.754) 0:03:04.480 ************ 2025-05-19 19:57:07.736230 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:57:07.736241 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:57:07.736252 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:57:07.736285 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:57:07.736293 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:57:07.736299 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:57:07.736306 | orchestrator | 2025-05-19 19:57:07.736312 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-19 19:57:07.736319 | orchestrator | Monday 19 May 2025 19:54:51 +0000 (0:00:01.026) 0:03:05.507 ************ 2025-05-19 19:57:07.736325 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:57:07.736332 | orchestrator | 2025-05-19 19:57:07.736339 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-19 19:57:07.736345 | orchestrator | Monday 19 May 2025 19:54:53 +0000 (0:00:02.746) 0:03:08.253 ************ 2025-05-19 19:57:07.736352 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:57:07.736358 | orchestrator | 2025-05-19 19:57:07.736365 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-19 19:57:07.736375 | orchestrator | Monday 19 May 2025 19:54:56 +0000 (0:00:02.449) 0:03:10.703 ************ 2025-05-19 19:57:07.736382 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:57:07.736389 | orchestrator | 2025-05-19 19:57:07.736395 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 19:57:07.736402 | orchestrator | Monday 19 May 2025 19:55:44 +0000 (0:00:48.686) 0:03:59.389 ************ 2025-05-19 19:57:07.736408 | orchestrator | 2025-05-19 19:57:07.736415 | orchestrator | TASK [neutron : Flush Handlers] 
************************************************ 2025-05-19 19:57:07.736421 | orchestrator | Monday 19 May 2025 19:55:44 +0000 (0:00:00.058) 0:03:59.448 ************ 2025-05-19 19:57:07.736428 | orchestrator | 2025-05-19 19:57:07.736434 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 19:57:07.736441 | orchestrator | Monday 19 May 2025 19:55:45 +0000 (0:00:00.357) 0:03:59.806 ************ 2025-05-19 19:57:07.736447 | orchestrator | 2025-05-19 19:57:07.736454 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 19:57:07.736460 | orchestrator | Monday 19 May 2025 19:55:45 +0000 (0:00:00.064) 0:03:59.870 ************ 2025-05-19 19:57:07.736467 | orchestrator | 2025-05-19 19:57:07.736473 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 19:57:07.736480 | orchestrator | Monday 19 May 2025 19:55:45 +0000 (0:00:00.057) 0:03:59.928 ************ 2025-05-19 19:57:07.736486 | orchestrator | 2025-05-19 19:57:07.736493 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 19:57:07.736499 | orchestrator | Monday 19 May 2025 19:55:45 +0000 (0:00:00.065) 0:03:59.993 ************ 2025-05-19 19:57:07.736506 | orchestrator | 2025-05-19 19:57:07.736512 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-19 19:57:07.736519 | orchestrator | Monday 19 May 2025 19:55:45 +0000 (0:00:00.405) 0:04:00.399 ************ 2025-05-19 19:57:07.736525 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:57:07.736532 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:57:07.736539 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:57:07.736545 | orchestrator | 2025-05-19 19:57:07.736552 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-19 19:57:07.736558 | orchestrator | Monday 19 May 2025 19:56:14 +0000 (0:00:28.225) 0:04:28.624 ************ 2025-05-19 19:57:07.736565 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:57:07.736576 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:57:07.736583 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:57:07.736590 | orchestrator | 2025-05-19 19:57:07.736596 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:57:07.736608 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 19:57:07.736616 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-19 19:57:07.736623 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-19 19:57:07.736630 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 19:57:07.736636 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 19:57:07.736643 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 19:57:07.736649 | orchestrator | 2025-05-19 19:57:07.736656 | orchestrator | 2025-05-19 19:57:07.736662 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:57:07.736669 | orchestrator | Monday 19 May 2025 
19:57:05 +0000 (0:00:51.216) 0:05:19.841 ************ 2025-05-19 19:57:07.736676 | orchestrator | =============================================================================== 2025-05-19 19:57:07.736682 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 51.22s 2025-05-19 19:57:07.736689 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 48.69s 2025-05-19 19:57:07.736695 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.23s 2025-05-19 19:57:07.736702 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.38s 2025-05-19 19:57:07.736708 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 8.12s 2025-05-19 19:57:07.736715 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.00s 2025-05-19 19:57:07.736721 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 7.30s 2025-05-19 19:57:07.736728 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.66s 2025-05-19 19:57:07.736734 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 6.35s 2025-05-19 19:57:07.736741 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 6.00s 2025-05-19 19:57:07.736747 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.41s 2025-05-19 19:57:07.736754 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.19s 2025-05-19 19:57:07.736760 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.75s 2025-05-19 19:57:07.736767 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.50s 2025-05-19 19:57:07.736776 | orchestrator | Setting sysctl values --------------------------------------------------- 4.38s 2025-05-19 19:57:07.736783 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.18s 2025-05-19 19:57:07.736790 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.09s 2025-05-19 19:57:07.736796 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 4.06s 2025-05-19 19:57:07.736803 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.92s 2025-05-19 19:57:07.736809 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.82s 2025-05-19 19:57:07.736816 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:07.736828 | orchestrator | 2025-05-19 19:57:07 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:07.736835 | orchestrator | 2025-05-19 19:57:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:10.728145 | orchestrator | 2025-05-19 19:57:10 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:10.728367 | orchestrator | 2025-05-19 19:57:10 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:10.728639 | orchestrator | 2025-05-19 19:57:10 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:10.729238 | orchestrator | 2025-05-19 19:57:10 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:10.730987 | orchestrator | 2025-05-19 19:57:10 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:10.731066 | orchestrator | 2025-05-19 19:57:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:13.754980 | orchestrator | 2025-05-19 19:57:13 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:13.755097 | orchestrator | 2025-05-19 19:57:13 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:13.755484 | orchestrator | 2025-05-19 19:57:13 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:13.756003 | orchestrator | 2025-05-19 19:57:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:13.756587 | orchestrator | 2025-05-19 19:57:13 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:13.756609 | orchestrator | 2025-05-19 19:57:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:16.796099 | orchestrator | 2025-05-19 19:57:16 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:16.796218 | orchestrator | 2025-05-19 19:57:16 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:16.798319 | orchestrator | 2025-05-19 19:57:16 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:16.801828 | orchestrator | 2025-05-19 19:57:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:16.804761 | orchestrator | 2025-05-19 19:57:16 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:16.804807 | orchestrator | 2025-05-19 19:57:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:19.837083 | orchestrator | 2025-05-19 19:57:19 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:19.838326 | orchestrator | 2025-05-19 19:57:19 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:19.838891 | orchestrator | 2025-05-19 19:57:19 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:19.839759 | orchestrator | 2025-05-19 19:57:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:19.840077 | orchestrator | 2025-05-19 19:57:19 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:19.840272 | orchestrator | 2025-05-19 19:57:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:22.870378 | orchestrator | 2025-05-19 19:57:22 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:57:22.870516 | orchestrator | 2025-05-19 19:57:22 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:57:22.870881 | orchestrator | 2025-05-19 19:57:22 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:57:22.872441 | orchestrator | 2025-05-19 19:57:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:57:22.873594 | orchestrator | 2025-05-19 19:57:22 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED 2025-05-19 19:57:22.873642 | orchestrator | 2025-05-19 19:57:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:57:25.906890 | orchestrator | 2025-05-19 19:57:25 | INFO  | Task 
edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED
2025-05-19 19:57:25.907121 | orchestrator | 2025-05-19 19:57:25 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED
2025-05-19 19:57:25.913670 | orchestrator | 2025-05-19 19:57:25 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED
2025-05-19 19:57:25.913766 | orchestrator | 2025-05-19 19:57:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:57:25.913783 | orchestrator | 2025-05-19 19:57:25 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED
2025-05-19 19:57:25.913802 | orchestrator | 2025-05-19 19:57:25 | INFO  | Wait 1 second(s) until the next check
[... identical status checks, with all five tasks still in state STARTED followed by "Wait 1 second(s) until the next check", repeat roughly every three seconds from 19:57:28 through 19:58:57 ...]
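The status checks above and below are plain polling: the deploy job has queued five OSISM tasks and keeps re-reading their state, sleeping between passes, until each one leaves STARTED. A minimal sketch of that pattern in Python, assuming a hypothetical get_task_state() helper in place of the real OSISM/Celery result-backend lookup:

import itertools
import time

# Stand-in for the real state lookup (the job queries the OSISM task backend);
# this fake version reports STARTED a few times and then SUCCESS so the sketch
# runs on its own.
_fake_polls = {}

def get_task_state(task_id: str) -> str:
    counter = _fake_polls.setdefault(task_id, itertools.count())
    return "STARTED" if next(counter) < 3 else "SUCCESS"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Poll every task until it leaves STARTED, logging each check."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks([
        "edb9ba7b-bbff-4f81-8407-b8b36a5f552e",
        "e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7",
        "cd7fb752-37a6-4746-8447-6f456b02b485",
        "6cbcb477-08de-4f2b-846d-588e50cbe210",
        "4cfbf18e-1b45-4985-8c78-390246ab151e",
    ])

With a real lookup against the task backend in place of the fake helper, the output would have the same shape as the surrounding log: one status line per task per pass, plus the wait message between passes.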
2025-05-19 19:59:00.365041 | orchestrator | 2025-05-19 19:59:00 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED
2025-05-19 19:59:00.365196 | orchestrator | 2025-05-19 19:59:00 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED
2025-05-19 19:59:00.365204 | orchestrator | 2025-05-19 19:59:00 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED
2025-05-19 19:59:00.365638 | orchestrator | 2025-05-19 19:59:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:59:00.366636 | orchestrator | 2025-05-19 19:59:00 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state STARTED
2025-05-19 19:59:00.366664 | orchestrator | 2025-05-19 19:59:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 19:59:03.399258 | orchestrator | 2025-05-19 19:59:03 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED
2025-05-19 19:59:03.399915 | orchestrator | 2025-05-19 19:59:03 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED
2025-05-19 19:59:03.401269 | orchestrator | 2025-05-19 19:59:03 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED
2025-05-19 19:59:03.402352 | orchestrator | 2025-05-19 19:59:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 19:59:03.406688 | orchestrator | 2025-05-19 19:59:03 | INFO  | Task 4cfbf18e-1b45-4985-8c78-390246ab151e is in state SUCCESS
2025-05-19 19:59:03.407588 | orchestrator | 
2025-05-19 19:59:03.407624 | orchestrator | 
2025-05-19 19:59:03.407645 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 19:59:03.407665 | orchestrator | 
2025-05-19 19:59:03.407682 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 19:59:03.407700 | orchestrator | Monday 19 May 2025 19:54:19 +0000 (0:00:00.327) 0:00:00.327 ************
2025-05-19 19:59:03.407718 | orchestrator | ok: [testbed-manager]
2025-05-19 19:59:03.407738 | orchestrator | ok: [testbed-node-0]
2025-05-19 19:59:03.407754 | orchestrator | ok: [testbed-node-1]
2025-05-19 19:59:03.407799 | orchestrator | ok: [testbed-node-2]
2025-05-19 19:59:03.407821 | orchestrator | ok: [testbed-node-3]
2025-05-19 19:59:03.407839 | orchestrator | ok: [testbed-node-4]
2025-05-19 19:59:03.407924 | orchestrator | ok: [testbed-node-5]
2025-05-19 19:59:03.407939 | orchestrator | 
2025-05-19 19:59:03.407950 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 19:59:03.407961 | orchestrator | Monday 19 May 2025 19:54:20 +0000 (0:00:01.271) 0:00:01.598 ************
2025-05-19 19:59:03.407974 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-05-19 19:59:03.407985 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-05-19 19:59:03.407996 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-05-19 19:59:03.408007 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-05-19 19:59:03.408018 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-05-19 19:59:03.408029 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-05-19 19:59:03.408119 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-05-19 19:59:03.408133 | orchestrator | 
2025-05-19 19:59:03.408144 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-05-19 19:59:03.408155 | orchestrator | 
2025-05-19 19:59:03.408166 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-05-19 19:59:03.408177 | orchestrator | Monday 19 May 2025 19:54:22 +0000 (0:00:01.736) 0:00:03.335 ************
2025-05-19 19:59:03.408190 | orchestrator | 
included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:59:03.408205 | orchestrator | 2025-05-19 19:59:03.408317 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-19 19:59:03.408333 | orchestrator | Monday 19 May 2025 19:54:25 +0000 (0:00:03.075) 0:00:06.411 ************ 2025-05-19 19:59:03.408352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408373 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:59:03.408409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408438 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408453 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408485 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408551 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.408563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.408582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.408593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.408665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:59:03.408727 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.408746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.408758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.408787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.408804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.408835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.408862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408874 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.408903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.408956 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.408974 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.408987 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.409004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.409030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.409205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.409218 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.409239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.409257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.409319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.409331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.409343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.410658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.410696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.410732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.410747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.410761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.410785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.410797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.410956 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.410986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.410998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.411032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 
'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.411198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411211 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411233 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.411286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.411319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.411380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.411417 | orchestrator | 2025-05-19 19:59:03.411431 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-19 19:59:03.411445 | orchestrator | Monday 19 May 2025 19:54:29 +0000 (0:00:04.427) 0:00:10.838 ************ 2025-05-19 19:59:03.411459 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:59:03.411473 | orchestrator | 2025-05-19 19:59:03.411485 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-19 19:59:03.411497 | orchestrator | Monday 19 May 2025 19:54:31 +0000 (0:00:02.250) 0:00:13.089 ************ 2025-05-19 19:59:03.411515 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:59:03.411530 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411592 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.411632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411654 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411704 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411767 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:59:03.411784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411839 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411873 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.411941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.411971 | orchestrator | 2025-05-19 19:59:03.411981 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-19 19:59:03.411991 | orchestrator | Monday 19 May 2025 19:54:39 +0000 (0:00:07.533) 0:00:20.622 ************ 2025-05-19 19:59:03.412005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412119 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.412137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.412148 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412164 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412176 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.412186 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412203 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.412214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412275 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.412285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412347 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.412357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412391 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.412401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412437 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.412453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412483 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.412493 | orchestrator | 2025-05-19 19:59:03.412503 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-05-19 19:59:03.412517 | orchestrator | Monday 19 May 2025 19:54:42 +0000 (0:00:02.754) 0:00:23.377 ************ 2025-05-19 19:59:03.412527 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.412543 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412553 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412570 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.412581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
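Editor's note: the per-item `changed` / `skipping: Conditional result was False` results in the cert-copy and config tasks above come from loops over the role's service dictionary, with each entry filtered by a `when` condition (service enabled, host in the service's group, backend TLS in use). The sketch below is only an illustration of that loop pattern as it appears in this log; the variable name `prometheus_services`, the file paths, and the exact conditions are assumptions, not copied from the actual kolla-ansible role.

    # Illustrative sketch (assumed names/paths), not the real role task
    - name: prometheus | Copying over backend internal TLS key (sketch)
      template:
        src: "{{ kolla_certificates_dir }}/{{ item.key }}-key.pem"      # assumed source location
        dest: "{{ node_config_directory }}/{{ item.key }}/{{ item.key }}-key.pem"
        mode: "0600"
      # Items that fail any of these conditions show up in the log as
      # "skipping: Conditional result was False"; the rest report "changed".
      when:
        - item.value.enabled | bool
        - inventory_hostname in groups[item.value.group]
        - kolla_enable_tls_backend | bool                               # assumed toggle name
      with_dict: "{{ prometheus_services }}"                            # dict of service definitions as dumped above

Each loop item carries the same `key`/`value` structure printed in the log entries (container_name, group, enabled, image, volumes, dimensions, optional haproxy settings), which is why the full service definition is echoed for every host and service.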
2025-05-19 19:59:03.412697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412724 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.412734 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.412744 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.412754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.412828 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.412843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.412911 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.412927 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.412944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.413560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.413591 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.413601 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 19:59:03.413612 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.413631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.413658 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.413668 | orchestrator | 2025-05-19 19:59:03.413678 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-19 19:59:03.413688 | orchestrator | Monday 19 May 2025 19:54:46 +0000 (0:00:04.248) 0:00:27.625 ************ 2025-05-19 19:59:03.413698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413765 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:59:03.413781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.413809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.413819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.413865 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.413881 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.413896 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.413906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.413916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.413926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.413936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.413981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414011 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.414148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.414272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.414367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414441 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:59:03.414456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.414478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414558 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.414645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': 
{'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414694 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414783 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.414805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.414830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414863 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.414874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.414936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.414963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.414974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.414996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.415040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.415058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.415068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.415143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.415173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.415226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.415252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.415273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.415328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.415350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.415360 | orchestrator | 2025-05-19 19:59:03.415370 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-19 19:59:03.415380 | orchestrator | Monday 19 May 2025 19:54:54 +0000 (0:00:07.890) 0:00:35.516 ************ 2025-05-19 19:59:03.415390 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:59:03.415400 | orchestrator | 2025-05-19 19:59:03.415410 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-19 19:59:03.415420 | orchestrator | Monday 19 May 2025 19:54:54 +0000 (0:00:00.574) 0:00:36.090 ************ 2025-05-19 19:59:03.415434 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415445 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415455 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415471 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 
'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415509 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415520 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415531 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415546 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415573 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1077384, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2667003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.415583 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415630 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415640 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415665 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415680 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415690 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415729 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415744 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415758 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-05-19 19:59:03.415775 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415788 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415825 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415859 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1077393, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.415869 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415877 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415890 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415898 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415912 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415920 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415952 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 
1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415961 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415969 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415982 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.415995 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416004 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416013 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416043 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416052 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416060 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416073 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416116 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416127 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1077388, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416135 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416167 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416176 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416185 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416202 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416217 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416226 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416234 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416264 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416273 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416282 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416301 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416309 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416317 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416325 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416354 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416364 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416372 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1077391, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416391 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416399 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.416407 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416415 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.416423 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416431 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.416440 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416468 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416478 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.416486 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416499 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.416507 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 19:59:03.416515 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.416527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1077408, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416536 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1077395, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416544 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1077390, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416553 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1077394, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2687004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416583 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1077407, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2717004, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1077389, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2677002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416606 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1077399, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2697003, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 19:59:03.416614 | orchestrator | 2025-05-19 19:59:03.416622 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-19 19:59:03.416631 | orchestrator | Monday 19 May 2025 19:55:33 +0000 (0:00:38.608) 0:01:14.698 ************ 2025-05-19 19:59:03.416643 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:59:03.416651 | orchestrator | 2025-05-19 19:59:03.416658 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-19 19:59:03.416666 | orchestrator | Monday 19 May 2025 19:55:33 +0000 (0:00:00.430) 0:01:15.129 ************ 
2025-05-19 19:59:03.416675 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416683 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416691 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416699 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416711 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416725 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:59:03.416738 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416751 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416766 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416779 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416792 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416805 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 19:59:03.416816 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416832 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416840 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416848 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416856 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416863 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416871 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416887 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416895 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416910 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416918 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416926 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416934 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416948 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416955 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.416963 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416971 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-05-19 19:59:03.416979 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.416987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.416995 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-05-19 19:59:03.417003 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 19:59:03.417037 | orchestrator | node-5/prometheus.yml.d' is 
not a directory 2025-05-19 19:59:03.417046 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 19:59:03.417054 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 19:59:03.417062 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 19:59:03.417070 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 19:59:03.417095 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 19:59:03.417104 | orchestrator | 2025-05-19 19:59:03.417112 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-05-19 19:59:03.417120 | orchestrator | Monday 19 May 2025 19:55:35 +0000 (0:00:01.403) 0:01:16.532 ************ 2025-05-19 19:59:03.417128 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417136 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417144 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417152 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417160 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417167 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417175 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417184 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417191 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417199 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417207 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-05-19 19:59:03.417215 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417223 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-05-19 19:59:03.417230 | orchestrator | 2025-05-19 19:59:03.417238 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-05-19 19:59:03.417246 | orchestrator | Monday 19 May 2025 19:55:54 +0000 (0:00:19.350) 0:01:35.883 ************ 2025-05-19 19:59:03.417259 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-19 19:59:03.417267 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417275 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-19 19:59:03.417283 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417290 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-19 19:59:03.417298 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417306 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-19 19:59:03.417314 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417321 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-05-19 19:59:03.417329 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417343 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  
2025-05-19 19:59:03.417351 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417358 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-05-19 19:59:03.417366 | orchestrator | 2025-05-19 19:59:03.417374 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-05-19 19:59:03.417382 | orchestrator | Monday 19 May 2025 19:56:00 +0000 (0:00:05.386) 0:01:41.269 ************ 2025-05-19 19:59:03.417390 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417398 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417406 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417414 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417422 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417430 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417438 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417446 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417454 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417462 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417469 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-05-19 19:59:03.417477 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417485 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-05-19 19:59:03.417493 | orchestrator | 2025-05-19 19:59:03.417501 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-05-19 19:59:03.417509 | orchestrator | Monday 19 May 2025 19:56:05 +0000 (0:00:05.219) 0:01:46.489 ************ 2025-05-19 19:59:03.417517 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:59:03.417525 | orchestrator | 2025-05-19 19:59:03.417536 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-05-19 19:59:03.417544 | orchestrator | Monday 19 May 2025 19:56:06 +0000 (0:00:00.715) 0:01:47.205 ************ 2025-05-19 19:59:03.417552 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.417560 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417568 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417575 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417583 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417591 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417598 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417606 | orchestrator | 2025-05-19 19:59:03.417614 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-05-19 19:59:03.417622 | orchestrator | Monday 19 May 2025 19:56:06 +0000 (0:00:00.826) 
0:01:48.032 ************ 2025-05-19 19:59:03.417630 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.417637 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417645 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417653 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417660 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.417668 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.417676 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.417683 | orchestrator | 2025-05-19 19:59:03.417691 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-19 19:59:03.417706 | orchestrator | Monday 19 May 2025 19:56:11 +0000 (0:00:04.904) 0:01:52.937 ************ 2025-05-19 19:59:03.417714 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417722 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417730 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417737 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417745 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417753 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417761 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417769 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417780 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417788 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417796 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417806 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417819 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-19 19:59:03.417832 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.417844 | orchestrator | 2025-05-19 19:59:03.417856 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-19 19:59:03.417869 | orchestrator | Monday 19 May 2025 19:56:15 +0000 (0:00:03.267) 0:01:56.204 ************ 2025-05-19 19:59:03.417881 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417894 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.417906 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417921 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.417929 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417937 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.417945 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417952 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.417960 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417968 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.417976 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-19 19:59:03.417984 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.417992 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-19 19:59:03.417999 | orchestrator | 2025-05-19 19:59:03.418007 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-19 19:59:03.418040 | orchestrator | Monday 19 May 2025 19:56:19 +0000 (0:00:04.809) 0:02:01.013 ************ 2025-05-19 19:59:03.418050 | orchestrator | [WARNING]: Skipped 2025-05-19 19:59:03.418058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-19 19:59:03.418066 | orchestrator | due to this access issue: 2025-05-19 19:59:03.418074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-19 19:59:03.418132 | orchestrator | not a directory 2025-05-19 19:59:03.418140 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 19:59:03.418148 | orchestrator | 2025-05-19 19:59:03.418156 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-19 19:59:03.418164 | orchestrator | Monday 19 May 2025 19:56:22 +0000 (0:00:02.617) 0:02:03.631 ************ 2025-05-19 19:59:03.418182 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.418190 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.418198 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.418206 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.418214 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.418221 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.418237 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.418245 | orchestrator | 2025-05-19 19:59:03.418253 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-19 19:59:03.418261 | orchestrator | Monday 19 May 2025 19:56:24 +0000 (0:00:01.810) 0:02:05.442 ************ 2025-05-19 19:59:03.418268 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.418276 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.418284 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.418292 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.418299 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.418307 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.418315 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.418322 | orchestrator | 2025-05-19 19:59:03.418329 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-19 19:59:03.418335 | orchestrator | Monday 19 May 2025 19:56:25 +0000 (0:00:01.323) 0:02:06.766 ************ 2025-05-19 19:59:03.418342 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418349 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.418355 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 
19:59:03.418362 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.418369 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418375 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.418382 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418388 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.418395 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418401 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.418408 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418415 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.418426 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-19 19:59:03.418432 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.418439 | orchestrator | 2025-05-19 19:59:03.418446 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-19 19:59:03.418453 | orchestrator | Monday 19 May 2025 19:56:29 +0000 (0:00:03.621) 0:02:10.387 ************ 2025-05-19 19:59:03.418459 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418466 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418472 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:03.418479 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:03.418486 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418492 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:03.418499 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418506 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:03.418512 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418523 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:03.418530 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418536 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:03.418543 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-19 19:59:03.418550 | orchestrator | skipping: [testbed-manager] 2025-05-19 19:59:03.418556 | orchestrator | 2025-05-19 19:59:03.418563 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-19 19:59:03.418570 | orchestrator | Monday 19 May 2025 19:56:33 +0000 (0:00:04.764) 0:02:15.151 ************ 2025-05-19 19:59:03.418577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 19:59:03.418640 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 19:59:03.418647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418723 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 19:59:03.418746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418760 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.418785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.418792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.418807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.418815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.418822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418840 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.418847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.418861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.418868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.418876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.418886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418900 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.418912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.418943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.418951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418962 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.418969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 19:59:03.418987 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 
'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.418994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419055 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419109 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.419118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.419154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.419161 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.419189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.419196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 19:59:03.419203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 19:59:03.419214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 19:59:03.419221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.419250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 
'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.419281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 19:59:03.419299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 19:59:03.419317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 19:59:03.419324 | orchestrator | 2025-05-19 19:59:03.419330 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-19 19:59:03.419337 | orchestrator | Monday 19 May 2025 19:56:39 +0000 (0:00:05.693) 0:02:20.845 ************ 2025-05-19 19:59:03.419344 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-19 19:59:03.419351 | orchestrator | 2025-05-19 19:59:03.419357 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419364 | orchestrator | Monday 19 May 2025 19:56:42 +0000 (0:00:03.224) 0:02:24.070 ************ 2025-05-19 19:59:03.419370 | orchestrator | 2025-05-19 19:59:03.419377 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419383 | orchestrator | Monday 19 May 2025 19:56:42 +0000 (0:00:00.057) 0:02:24.127 ************ 2025-05-19 19:59:03.419390 | orchestrator | 2025-05-19 19:59:03.419397 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419403 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:00.206) 0:02:24.334 ************ 2025-05-19 19:59:03.419410 | orchestrator | 2025-05-19 19:59:03.419417 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419423 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:00.051) 0:02:24.385 ************ 2025-05-19 19:59:03.419430 | orchestrator | 2025-05-19 19:59:03.419436 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419443 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:00.050) 0:02:24.436 ************ 2025-05-19 19:59:03.419450 | orchestrator | 2025-05-19 19:59:03.419456 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419463 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:00.055) 0:02:24.492 ************ 2025-05-19 19:59:03.419474 | orchestrator | 2025-05-19 19:59:03.419481 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-19 19:59:03.419487 | orchestrator | 
Monday 19 May 2025 19:56:43 +0000 (0:00:00.189) 0:02:24.681 ************ 2025-05-19 19:59:03.419494 | orchestrator | 2025-05-19 19:59:03.419500 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-19 19:59:03.419507 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:00.055) 0:02:24.737 ************ 2025-05-19 19:59:03.419513 | orchestrator | changed: [testbed-manager] 2025-05-19 19:59:03.419520 | orchestrator | 2025-05-19 19:59:03.419526 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-19 19:59:03.419533 | orchestrator | Monday 19 May 2025 19:57:00 +0000 (0:00:16.744) 0:02:41.482 ************ 2025-05-19 19:59:03.419540 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.419546 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.419556 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.419563 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:03.419570 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:03.419576 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:03.419583 | orchestrator | changed: [testbed-manager] 2025-05-19 19:59:03.419589 | orchestrator | 2025-05-19 19:59:03.419596 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-19 19:59:03.419603 | orchestrator | Monday 19 May 2025 19:57:22 +0000 (0:00:21.980) 0:03:03.462 ************ 2025-05-19 19:59:03.419609 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.419616 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.419623 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.419629 | orchestrator | 2025-05-19 19:59:03.419636 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-19 19:59:03.419642 | orchestrator | Monday 19 May 2025 19:57:31 +0000 (0:00:09.138) 0:03:12.601 ************ 2025-05-19 19:59:03.419649 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.419656 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.419662 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.419669 | orchestrator | 2025-05-19 19:59:03.419675 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-19 19:59:03.419682 | orchestrator | Monday 19 May 2025 19:57:46 +0000 (0:00:15.481) 0:03:28.083 ************ 2025-05-19 19:59:03.419688 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.419695 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.419701 | orchestrator | changed: [testbed-manager] 2025-05-19 19:59:03.419708 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:03.419714 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:03.419721 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.419728 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:03.419734 | orchestrator | 2025-05-19 19:59:03.419741 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-19 19:59:03.419747 | orchestrator | Monday 19 May 2025 19:58:16 +0000 (0:00:29.250) 0:03:57.334 ************ 2025-05-19 19:59:03.419754 | orchestrator | changed: [testbed-manager] 2025-05-19 19:59:03.419761 | orchestrator | 2025-05-19 19:59:03.419767 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-19 19:59:03.419774 | orchestrator | 
Monday 19 May 2025 19:58:31 +0000 (0:00:15.532) 0:04:12.867 ************ 2025-05-19 19:59:03.419780 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:03.419791 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:03.419797 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:03.419804 | orchestrator | 2025-05-19 19:59:03.419810 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-19 19:59:03.419817 | orchestrator | Monday 19 May 2025 19:58:43 +0000 (0:00:11.738) 0:04:24.605 ************ 2025-05-19 19:59:03.419824 | orchestrator | changed: [testbed-manager] 2025-05-19 19:59:03.419830 | orchestrator | 2025-05-19 19:59:03.419837 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-19 19:59:03.419848 | orchestrator | Monday 19 May 2025 19:58:50 +0000 (0:00:06.979) 0:04:31.585 ************ 2025-05-19 19:59:03.419854 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:03.419861 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:03.419867 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:03.419874 | orchestrator | 2025-05-19 19:59:03.419881 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:59:03.419888 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 19:59:03.419895 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-19 19:59:03.419902 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-19 19:59:03.419909 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-05-19 19:59:03.419915 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-19 19:59:03.419922 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-19 19:59:03.419929 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-05-19 19:59:03.419935 | orchestrator | 2025-05-19 19:59:03.419942 | orchestrator | 2025-05-19 19:59:03.419949 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:59:03.419955 | orchestrator | Monday 19 May 2025 19:59:02 +0000 (0:00:12.488) 0:04:44.073 ************ 2025-05-19 19:59:03.419962 | orchestrator | =============================================================================== 2025-05-19 19:59:03.419968 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 38.61s 2025-05-19 19:59:03.419975 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 29.25s 2025-05-19 19:59:03.419981 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 21.98s 2025-05-19 19:59:03.419988 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 19.35s 2025-05-19 19:59:03.419994 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.75s 2025-05-19 19:59:03.420001 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 15.53s 2025-05-19 19:59:03.420010 | orchestrator | prometheus : Restart prometheus-memcached-exporter container 
----------- 15.48s 2025-05-19 19:59:03.420017 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.49s 2025-05-19 19:59:03.420024 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.74s 2025-05-19 19:59:03.420030 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.14s 2025-05-19 19:59:03.420037 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.89s 2025-05-19 19:59:03.420043 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.53s 2025-05-19 19:59:03.420050 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.98s 2025-05-19 19:59:03.420056 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.69s 2025-05-19 19:59:03.420063 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.39s 2025-05-19 19:59:03.420069 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 5.22s 2025-05-19 19:59:03.420090 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.90s 2025-05-19 19:59:03.420111 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.81s 2025-05-19 19:59:03.420124 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 4.76s 2025-05-19 19:59:03.420135 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.43s 2025-05-19 19:59:03.420146 | orchestrator | 2025-05-19 19:59:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:06.453607 | orchestrator | 2025-05-19 19:59:06 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:06.453990 | orchestrator | 2025-05-19 19:59:06 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:06.455108 | orchestrator | 2025-05-19 19:59:06 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:59:06.455937 | orchestrator | 2025-05-19 19:59:06 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:06.456987 | orchestrator | 2025-05-19 19:59:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:06.457023 | orchestrator | 2025-05-19 19:59:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:09.500674 | orchestrator | 2025-05-19 19:59:09 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:09.502419 | orchestrator | 2025-05-19 19:59:09 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:09.502487 | orchestrator | 2025-05-19 19:59:09 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state STARTED 2025-05-19 19:59:09.505898 | orchestrator | 2025-05-19 19:59:09 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:09.508774 | orchestrator | 2025-05-19 19:59:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:09.509125 | orchestrator | 2025-05-19 19:59:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:12.563825 | orchestrator | 2025-05-19 19:59:12 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:12.564813 | orchestrator | 2025-05-19 19:59:12 | INFO  | Task 
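After the handlers above restart the exporter containers, one way to spot-check a node is to fetch its metrics endpoint directly. A minimal sketch, assuming the node exporter listens on its conventional port 9100 and the internal address from the log is reachable from where this runs:

# Spot-check sketch: confirm a restarted node exporter is serving metrics.
# Port 9100 is an assumption (the exporter's usual default), not from the log.
from urllib.request import urlopen

url = "http://192.168.16.10:9100/metrics"  # testbed-node-0 internal address seen in the log
with urlopen(url, timeout=5) as resp:
    body = resp.read().decode()

print(resp.status)                      # expect 200
print("\n".join(body.splitlines()[:5])) # first few metric lines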
e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:12.566945 | orchestrator | 2025-05-19 19:59:12 | INFO  | Task cd7fb752-37a6-4746-8447-6f456b02b485 is in state SUCCESS 2025-05-19 19:59:12.569603 | orchestrator | 2025-05-19 19:59:12.569658 | orchestrator | 2025-05-19 19:59:12.569664 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:59:12.569670 | orchestrator | 2025-05-19 19:59:12.569674 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:59:12.569679 | orchestrator | Monday 19 May 2025 19:55:28 +0000 (0:00:00.298) 0:00:00.298 ************ 2025-05-19 19:59:12.569683 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:59:12.569689 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:59:12.569694 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:59:12.569697 | orchestrator | 2025-05-19 19:59:12.569701 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:59:12.569728 | orchestrator | Monday 19 May 2025 19:55:28 +0000 (0:00:00.394) 0:00:00.692 ************ 2025-05-19 19:59:12.569733 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-19 19:59:12.569739 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-19 19:59:12.569743 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-19 19:59:12.569747 | orchestrator | 2025-05-19 19:59:12.569751 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-19 19:59:12.569755 | orchestrator | 2025-05-19 19:59:12.569759 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 19:59:12.569788 | orchestrator | Monday 19 May 2025 19:55:29 +0000 (0:00:00.323) 0:00:01.016 ************ 2025-05-19 19:59:12.569810 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:59:12.569816 | orchestrator | 2025-05-19 19:59:12.569820 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-19 19:59:12.569825 | orchestrator | Monday 19 May 2025 19:55:30 +0000 (0:00:00.768) 0:00:01.784 ************ 2025-05-19 19:59:12.569829 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-19 19:59:12.569833 | orchestrator | 2025-05-19 19:59:12.569837 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-19 19:59:12.569841 | orchestrator | Monday 19 May 2025 19:55:33 +0000 (0:00:03.769) 0:00:05.554 ************ 2025-05-19 19:59:12.569845 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-19 19:59:12.569850 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-19 19:59:12.569854 | orchestrator | 2025-05-19 19:59:12.569858 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-19 19:59:12.569862 | orchestrator | Monday 19 May 2025 19:55:40 +0000 (0:00:07.001) 0:00:12.556 ************ 2025-05-19 19:59:12.569866 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:59:12.569871 | orchestrator | 2025-05-19 19:59:12.569875 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-19 
19:59:12.569879 | orchestrator | Monday 19 May 2025 19:55:44 +0000 (0:00:03.941) 0:00:16.497 ************ 2025-05-19 19:59:12.569884 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:59:12.569888 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-19 19:59:12.569892 | orchestrator | 2025-05-19 19:59:12.569896 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-19 19:59:12.569900 | orchestrator | Monday 19 May 2025 19:55:48 +0000 (0:00:04.023) 0:00:20.521 ************ 2025-05-19 19:59:12.569904 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:59:12.569909 | orchestrator | 2025-05-19 19:59:12.569912 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-19 19:59:12.569916 | orchestrator | Monday 19 May 2025 19:55:52 +0000 (0:00:03.467) 0:00:23.989 ************ 2025-05-19 19:59:12.569921 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-19 19:59:12.569925 | orchestrator | 2025-05-19 19:59:12.570156 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-19 19:59:12.570176 | orchestrator | Monday 19 May 2025 19:55:57 +0000 (0:00:04.976) 0:00:28.965 ************ 2025-05-19 19:59:12.570196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
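The service-ks-register steps above (service, endpoints, project, user, role, role grant) are standard Keystone registration operations. A hedged openstacksdk sketch of roughly equivalent calls, assuming a clouds.yaml entry named "testbed" with admin credentials; kolla-ansible actually performs these through its own Ansible modules:

# Rough openstacksdk equivalent of the registration sequence logged above.
import openstack

conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry

service = conn.identity.create_service(name="glance", type="image")
for interface, url in {
    "internal": "https://api-int.testbed.osism.xyz:9292",
    "public": "https://api.testbed.osism.xyz:9292",
}.items():
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)

project = conn.identity.find_project("service")
user = conn.identity.create_user(name="glance", password="...",  # placeholder secret
                                 default_project_id=project.id)
role = conn.identity.find_role("admin")
conn.identity.assign_project_role_to_user(project, user, role)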
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570256 | orchestrator | 2025-05-19 19:59:12.570260 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 19:59:12.570264 | orchestrator | Monday 19 May 2025 19:56:03 +0000 (0:00:05.994) 0:00:34.960 ************ 2025-05-19 19:59:12.570268 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:59:12.570272 | orchestrator | 2025-05-19 19:59:12.570276 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-19 19:59:12.570280 | orchestrator | Monday 19 May 2025 19:56:03 +0000 (0:00:00.667) 0:00:35.627 ************ 2025-05-19 19:59:12.570283 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:12.570287 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.570291 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:12.570295 | orchestrator | 2025-05-19 19:59:12.570299 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-19 19:59:12.570302 | orchestrator | Monday 19 May 2025 19:56:14 +0000 (0:00:10.789) 0:00:46.417 ************ 2025-05-19 19:59:12.570308 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570314 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570320 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570325 | orchestrator | 2025-05-19 19:59:12.570331 | orchestrator | TASK [glance : Copy over ceph Glance 
keyrings] ********************************* 2025-05-19 19:59:12.570336 | orchestrator | Monday 19 May 2025 19:56:17 +0000 (0:00:02.966) 0:00:49.383 ************ 2025-05-19 19:59:12.570341 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570351 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570357 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:12.570363 | orchestrator | 2025-05-19 19:59:12.570369 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-19 19:59:12.570375 | orchestrator | Monday 19 May 2025 19:56:19 +0000 (0:00:01.581) 0:00:50.965 ************ 2025-05-19 19:59:12.570381 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:59:12.570387 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:59:12.570399 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:59:12.570405 | orchestrator | 2025-05-19 19:59:12.570411 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-19 19:59:12.570417 | orchestrator | Monday 19 May 2025 19:56:20 +0000 (0:00:01.137) 0:00:52.102 ************ 2025-05-19 19:59:12.570423 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570429 | orchestrator | 2025-05-19 19:59:12.570435 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-19 19:59:12.570442 | orchestrator | Monday 19 May 2025 19:56:20 +0000 (0:00:00.218) 0:00:52.320 ************ 2025-05-19 19:59:12.570448 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570455 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570459 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570463 | orchestrator | 2025-05-19 19:59:12.570467 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 19:59:12.570470 | orchestrator | Monday 19 May 2025 19:56:21 +0000 (0:00:00.712) 0:00:53.033 ************ 2025-05-19 19:59:12.570474 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 19:59:12.570478 | orchestrator | 2025-05-19 19:59:12.570482 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-19 19:59:12.570485 | orchestrator | Monday 19 May 2025 19:56:22 +0000 (0:00:01.088) 0:00:54.121 ************ 2025-05-19 19:59:12.570494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
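The external_ceph tasks above place a ceph.conf and a Glance keyring under the service's config directory so that glance-api can use an RBD-backed image store. A sketch of the kind of store options such a setup typically ends up with; the option names come from glance_store's rbd driver, and the values are illustrative assumptions rather than this deployment's rendered configuration:

# Sketch of RBD store options for glance-api.conf (values are assumptions).
import configparser
import io

cfg = configparser.ConfigParser()
cfg["glance_store"] = {"default_backend": "rbd"}
cfg["rbd"] = {
    "rbd_store_pool": "images",
    "rbd_store_user": "glance",
    "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
}

buf = io.StringIO()
cfg.write(buf)
print(buf.getvalue())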
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570519 | orchestrator | 2025-05-19 19:59:12.570523 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-19 19:59:12.570526 | orchestrator | Monday 19 May 2025 19:56:29 +0000 (0:00:06.885) 0:01:01.007 ************ 2025-05-19 19:59:12.570533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570555 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570571 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570574 | orchestrator | 2025-05-19 19:59:12.570578 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-19 19:59:12.570582 | orchestrator | Monday 19 May 2025 19:56:36 +0000 (0:00:06.789) 0:01:07.797 ************ 2025-05-19 19:59:12.570593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570598 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570606 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 19:59:12.570619 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570623 | orchestrator | 2025-05-19 19:59:12.570627 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-19 19:59:12.570631 | orchestrator | Monday 19 May 2025 19:56:39 +0000 (0:00:03.531) 0:01:11.329 ************ 2025-05-19 19:59:12.570635 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570638 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570642 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570646 | orchestrator | 2025-05-19 19:59:12.570651 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-19 19:59:12.570655 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:03.612) 0:01:14.941 ************ 2025-05-19 19:59:12.570659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570713 | orchestrator | 2025-05-19 19:59:12.570718 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-19 19:59:12.570722 | orchestrator | Monday 19 May 2025 19:56:46 +0000 (0:00:03.686) 0:01:18.628 ************ 2025-05-19 19:59:12.570726 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.570731 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:12.570735 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:12.570739 | orchestrator | 2025-05-19 19:59:12.570743 | orchestrator | TASK [glance : Copying over glance-cache.conf for 
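The haproxy fragment repeated in the items above (mode, backend_http_extra, custom_member_list) is the data kolla turns into load-balancer backend definitions for the glance API. A small sketch of how that member list maps onto an HAProxy backend block, using the entries shown in the log; this only illustrates what the data means and is not kolla's actual template:

# Render the logged custom_member_list into an HAProxy-style backend block.
members = [
    "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    "",  # the trailing empty entry seen in the log is simply skipped
]

backend = ["backend glance_api_back", "    mode http", "    timeout server 6h"]
backend += [f"    {m}" for m in members if m]
print("\n".join(backend))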
glance_api] ****************** 2025-05-19 19:59:12.570748 | orchestrator | Monday 19 May 2025 19:56:56 +0000 (0:00:10.084) 0:01:28.713 ************ 2025-05-19 19:59:12.570752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570756 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570760 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570764 | orchestrator | 2025-05-19 19:59:12.570768 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-19 19:59:12.570773 | orchestrator | Monday 19 May 2025 19:57:11 +0000 (0:00:14.957) 0:01:43.670 ************ 2025-05-19 19:59:12.570777 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570781 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570785 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570789 | orchestrator | 2025-05-19 19:59:12.570794 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-19 19:59:12.570798 | orchestrator | Monday 19 May 2025 19:57:21 +0000 (0:00:09.607) 0:01:53.277 ************ 2025-05-19 19:59:12.570802 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570806 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570810 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570815 | orchestrator | 2025-05-19 19:59:12.570819 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-19 19:59:12.570823 | orchestrator | Monday 19 May 2025 19:57:30 +0000 (0:00:09.402) 0:02:02.680 ************ 2025-05-19 19:59:12.570827 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570834 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570838 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570843 | orchestrator | 2025-05-19 19:59:12.570847 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-19 19:59:12.570851 | orchestrator | Monday 19 May 2025 19:57:41 +0000 (0:00:10.515) 0:02:13.196 ************ 2025-05-19 19:59:12.570856 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570860 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570864 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570868 | orchestrator | 2025-05-19 19:59:12.570872 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-19 19:59:12.570877 | orchestrator | Monday 19 May 2025 19:57:41 +0000 (0:00:00.248) 0:02:13.444 ************ 2025-05-19 19:59:12.570885 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 19:59:12.570889 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570894 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 19:59:12.570898 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570903 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 19:59:12.570907 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570912 | orchestrator | 2025-05-19 19:59:12.570916 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-19 19:59:12.570920 | orchestrator | Monday 19 May 2025 19:57:47 +0000 (0:00:05.320) 0:02:18.765 
************ 2025-05-19 19:59:12.570927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 19:59:12.570967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 19:59:12.570972 | orchestrator | 2025-05-19 19:59:12.570975 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 19:59:12.570979 | orchestrator | Monday 19 May 2025 19:57:59 +0000 (0:00:12.043) 0:02:30.809 ************ 2025-05-19 19:59:12.570983 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:12.570987 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:12.570993 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:12.570997 | orchestrator | 2025-05-19 19:59:12.571003 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-19 19:59:12.571098 | orchestrator | Monday 19 May 2025 19:58:00 +0000 (0:00:00.940) 0:02:31.750 ************ 2025-05-19 19:59:12.571102 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571106 | orchestrator | 2025-05-19 19:59:12.571109 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-19 19:59:12.571113 | orchestrator | Monday 19 May 2025 19:58:02 +0000 (0:00:02.427) 0:02:34.177 ************ 2025-05-19 19:59:12.571117 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571121 | orchestrator | 2025-05-19 19:59:12.571124 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-19 19:59:12.571128 | orchestrator | Monday 19 May 2025 19:58:05 +0000 (0:00:02.693) 0:02:36.871 ************ 2025-05-19 19:59:12.571132 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571136 | orchestrator | 2025-05-19 19:59:12.571139 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-19 19:59:12.571143 | orchestrator | Monday 19 May 2025 19:58:07 +0000 (0:00:02.506) 0:02:39.377 ************ 2025-05-19 19:59:12.571147 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571150 | orchestrator | 2025-05-19 19:59:12.571154 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-19 19:59:12.571158 | orchestrator | Monday 19 May 2025 19:58:36 +0000 (0:00:29.198) 0:03:08.576 ************ 2025-05-19 19:59:12.571162 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571165 | orchestrator | 2025-05-19 19:59:12.571169 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 19:59:12.571173 | orchestrator | Monday 19 May 2025 19:58:39 +0000 (0:00:02.602) 0:03:11.178 ************ 2025-05-19 19:59:12.571177 | orchestrator | 2025-05-19 19:59:12.571181 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 19:59:12.571185 | orchestrator | Monday 19 May 2025 19:58:39 +0000 (0:00:00.051) 0:03:11.229 ************ 2025-05-19 19:59:12.571189 | orchestrator | 2025-05-19 19:59:12.571193 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 19:59:12.571196 | orchestrator | Monday 19 May 2025 19:58:39 +0000 (0:00:00.047) 
0:03:11.277 ************ 2025-05-19 19:59:12.571200 | orchestrator | 2025-05-19 19:59:12.571204 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-19 19:59:12.571207 | orchestrator | Monday 19 May 2025 19:58:39 +0000 (0:00:00.131) 0:03:11.409 ************ 2025-05-19 19:59:12.571211 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:12.571215 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:12.571218 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:12.571222 | orchestrator | 2025-05-19 19:59:12.571226 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:59:12.571231 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-19 19:59:12.571237 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 19:59:12.571241 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 19:59:12.571245 | orchestrator | 2025-05-19 19:59:12.571249 | orchestrator | 2025-05-19 19:59:12.571253 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:59:12.571256 | orchestrator | Monday 19 May 2025 19:59:11 +0000 (0:00:31.523) 0:03:42.932 ************ 2025-05-19 19:59:12.571260 | orchestrator | =============================================================================== 2025-05-19 19:59:12.571264 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.52s 2025-05-19 19:59:12.571272 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.20s 2025-05-19 19:59:12.571279 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 14.96s 2025-05-19 19:59:12.571283 | orchestrator | glance : Check glance containers --------------------------------------- 12.04s 2025-05-19 19:59:12.571287 | orchestrator | glance : Ensuring glance service ceph config subdir exists ------------- 10.79s 2025-05-19 19:59:12.571290 | orchestrator | glance : Copying over property-protections-rules.conf ------------------ 10.52s 2025-05-19 19:59:12.571294 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.08s 2025-05-19 19:59:12.571298 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 9.61s 2025-05-19 19:59:12.571302 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 9.40s 2025-05-19 19:59:12.571306 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.00s 2025-05-19 19:59:12.571310 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.89s 2025-05-19 19:59:12.571313 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.79s 2025-05-19 19:59:12.571317 | orchestrator | glance : Ensuring config directories exist ------------------------------ 5.99s 2025-05-19 19:59:12.571321 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.32s 2025-05-19 19:59:12.571325 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.98s 2025-05-19 19:59:12.571329 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.02s 2025-05-19 
19:59:12.571332 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.94s 2025-05-19 19:59:12.571336 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.77s 2025-05-19 19:59:12.571340 | orchestrator | glance : Copying over config.json files for services -------------------- 3.69s 2025-05-19 19:59:12.571347 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.61s 2025-05-19 19:59:12.571351 | orchestrator | 2025-05-19 19:59:12 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:12.571932 | orchestrator | 2025-05-19 19:59:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:12.571959 | orchestrator | 2025-05-19 19:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:15.621704 | orchestrator | 2025-05-19 19:59:15 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:15.622881 | orchestrator | 2025-05-19 19:59:15 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:15.624560 | orchestrator | 2025-05-19 19:59:15 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:15.628630 | orchestrator | 2025-05-19 19:59:15 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:15.630196 | orchestrator | 2025-05-19 19:59:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:15.630303 | orchestrator | 2025-05-19 19:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:18.681243 | orchestrator | 2025-05-19 19:59:18 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:18.682197 | orchestrator | 2025-05-19 19:59:18 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:18.683188 | orchestrator | 2025-05-19 19:59:18 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:18.684250 | orchestrator | 2025-05-19 19:59:18 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:18.685326 | orchestrator | 2025-05-19 19:59:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:18.685380 | orchestrator | 2025-05-19 19:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:21.737627 | orchestrator | 2025-05-19 19:59:21 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:21.739181 | orchestrator | 2025-05-19 19:59:21 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:21.740487 | orchestrator | 2025-05-19 19:59:21 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:21.741785 | orchestrator | 2025-05-19 19:59:21 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:21.743042 | orchestrator | 2025-05-19 19:59:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:21.743095 | orchestrator | 2025-05-19 19:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:24.800528 | orchestrator | 2025-05-19 19:59:24 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:24.803412 | orchestrator | 2025-05-19 19:59:24 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:24.805110 | 
orchestrator | 2025-05-19 19:59:24 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:24.806536 | orchestrator | 2025-05-19 19:59:24 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:24.807941 | orchestrator | 2025-05-19 19:59:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:24.807977 | orchestrator | 2025-05-19 19:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:27.857134 | orchestrator | 2025-05-19 19:59:27 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:27.858621 | orchestrator | 2025-05-19 19:59:27 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:27.860059 | orchestrator | 2025-05-19 19:59:27 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:27.861321 | orchestrator | 2025-05-19 19:59:27 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:27.862615 | orchestrator | 2025-05-19 19:59:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:27.862659 | orchestrator | 2025-05-19 19:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:30.916537 | orchestrator | 2025-05-19 19:59:30 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:30.916659 | orchestrator | 2025-05-19 19:59:30 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:30.918357 | orchestrator | 2025-05-19 19:59:30 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:30.920418 | orchestrator | 2025-05-19 19:59:30 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:30.922532 | orchestrator | 2025-05-19 19:59:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:30.922585 | orchestrator | 2025-05-19 19:59:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:33.978382 | orchestrator | 2025-05-19 19:59:33 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:33.982430 | orchestrator | 2025-05-19 19:59:33 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:33.983974 | orchestrator | 2025-05-19 19:59:33 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:33.985731 | orchestrator | 2025-05-19 19:59:33 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:33.987136 | orchestrator | 2025-05-19 19:59:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:33.987162 | orchestrator | 2025-05-19 19:59:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:37.048614 | orchestrator | 2025-05-19 19:59:37 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:37.051208 | orchestrator | 2025-05-19 19:59:37 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:37.055551 | orchestrator | 2025-05-19 19:59:37 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:37.057662 | orchestrator | 2025-05-19 19:59:37 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:37.058840 | orchestrator | 2025-05-19 19:59:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 
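The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" lines above are the deploy wrapper polling its background Kolla tasks; a few lines further down one of them flips to SUCCESS and its captured Ansible output (the cinder play) is printed. A minimal sketch of that polling pattern, assuming a caller-supplied state lookup rather than the actual osism/Celery API:

```python
import time
from typing import Callable, Iterable


def wait_for_tasks(
    task_ids: Iterable[str],
    get_state: Callable[[str], str],
    interval: float = 1.0,
) -> None:
    """Poll each task until it leaves the running states.

    Mirrors the 'is in state STARTED ... Wait 1 second(s)' lines in the
    log above. `get_state` is a hypothetical helper (e.g. wrapping a
    Celery AsyncResult lookup) that returns a state string per task id.
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("SUCCESS", "FAILURE"):
                still_running.append(task_id)
        pending = still_running
        if pending:
            # Back off briefly before the next round of checks.
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

With the five task UUIDs from this log and a `get_state` that eventually returns SUCCESS, this reproduces the interleaved state lines and the one-second back-off seen here; the real wrapper additionally dumps the finished task's Ansible output once it completes, which is why the cinder play appears in full right after the SUCCESS line below.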
2025-05-19 19:59:37.058947 | orchestrator | 2025-05-19 19:59:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:40.101496 | orchestrator | 2025-05-19 19:59:40 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:40.102823 | orchestrator | 2025-05-19 19:59:40 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state STARTED 2025-05-19 19:59:40.104043 | orchestrator | 2025-05-19 19:59:40 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:40.105350 | orchestrator | 2025-05-19 19:59:40 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:40.106422 | orchestrator | 2025-05-19 19:59:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:40.106447 | orchestrator | 2025-05-19 19:59:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:43.154841 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:43.157945 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task edb9ba7b-bbff-4f81-8407-b8b36a5f552e is in state SUCCESS 2025-05-19 19:59:43.159793 | orchestrator | 2025-05-19 19:59:43.159844 | orchestrator | 2025-05-19 19:59:43.159855 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 19:59:43.159865 | orchestrator | 2025-05-19 19:59:43.159875 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 19:59:43.159884 | orchestrator | Monday 19 May 2025 19:56:22 +0000 (0:00:00.519) 0:00:00.519 ************ 2025-05-19 19:59:43.159893 | orchestrator | ok: [testbed-node-0] 2025-05-19 19:59:43.159904 | orchestrator | ok: [testbed-node-1] 2025-05-19 19:59:43.159913 | orchestrator | ok: [testbed-node-2] 2025-05-19 19:59:43.159922 | orchestrator | ok: [testbed-node-3] 2025-05-19 19:59:43.159930 | orchestrator | ok: [testbed-node-4] 2025-05-19 19:59:43.159939 | orchestrator | ok: [testbed-node-5] 2025-05-19 19:59:43.159947 | orchestrator | 2025-05-19 19:59:43.159956 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 19:59:43.160057 | orchestrator | Monday 19 May 2025 19:56:23 +0000 (0:00:00.727) 0:00:01.247 ************ 2025-05-19 19:59:43.160072 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-19 19:59:43.160081 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-19 19:59:43.160090 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-19 19:59:43.160099 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-19 19:59:43.160108 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-19 19:59:43.160117 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-19 19:59:43.160126 | orchestrator | 2025-05-19 19:59:43.160134 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-19 19:59:43.160171 | orchestrator | 2025-05-19 19:59:43.160181 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 19:59:43.160189 | orchestrator | Monday 19 May 2025 19:56:24 +0000 (0:00:01.368) 0:00:02.616 ************ 2025-05-19 19:59:43.160198 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-05-19 19:59:43.160210 | orchestrator | 2025-05-19 19:59:43.160218 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-19 19:59:43.160227 | orchestrator | Monday 19 May 2025 19:56:26 +0000 (0:00:02.330) 0:00:04.946 ************ 2025-05-19 19:59:43.160238 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-19 19:59:43.160252 | orchestrator | 2025-05-19 19:59:43.160267 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-19 19:59:43.160281 | orchestrator | Monday 19 May 2025 19:56:31 +0000 (0:00:04.559) 0:00:09.506 ************ 2025-05-19 19:59:43.160295 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-19 19:59:43.160310 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-19 19:59:43.160323 | orchestrator | 2025-05-19 19:59:43.160336 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-19 19:59:43.160830 | orchestrator | Monday 19 May 2025 19:56:39 +0000 (0:00:07.969) 0:00:17.476 ************ 2025-05-19 19:59:43.160854 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 19:59:43.160863 | orchestrator | 2025-05-19 19:59:43.160872 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-19 19:59:43.160881 | orchestrator | Monday 19 May 2025 19:56:43 +0000 (0:00:03.801) 0:00:21.277 ************ 2025-05-19 19:59:43.160890 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 19:59:43.160899 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-19 19:59:43.160908 | orchestrator | 2025-05-19 19:59:43.160917 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-19 19:59:43.160925 | orchestrator | Monday 19 May 2025 19:56:47 +0000 (0:00:04.156) 0:00:25.433 ************ 2025-05-19 19:59:43.160935 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 19:59:43.160943 | orchestrator | 2025-05-19 19:59:43.160952 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-19 19:59:43.160961 | orchestrator | Monday 19 May 2025 19:56:50 +0000 (0:00:03.430) 0:00:28.864 ************ 2025-05-19 19:59:43.160969 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-19 19:59:43.160978 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-19 19:59:43.160987 | orchestrator | 2025-05-19 19:59:43.160995 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-19 19:59:43.161004 | orchestrator | Monday 19 May 2025 19:57:00 +0000 (0:00:09.565) 0:00:38.429 ************ 2025-05-19 19:59:43.161104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.161131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.161141 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.161152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.161439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.161457 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.161520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.161553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.161569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.161586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.161601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.162309 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.162376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.162391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162407 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.162473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.162504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.162520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.162536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162569 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162649 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.162686 | orchestrator | 2025-05-19 19:59:43.162701 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 19:59:43.162831 | orchestrator | Monday 19 May 2025 19:57:04 +0000 (0:00:04.640) 0:00:43.070 ************ 2025-05-19 19:59:43.162846 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.162855 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.162864 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.162873 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:59:43.162882 | orchestrator | 2025-05-19 19:59:43.162891 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-19 19:59:43.162900 | orchestrator | Monday 19 May 2025 19:57:07 +0000 (0:00:02.952) 0:00:46.023 ************ 2025-05-19 19:59:43.162909 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-19 19:59:43.162918 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-19 19:59:43.162932 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-19 19:59:43.162946 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-19 19:59:43.162960 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-19 19:59:43.162974 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-19 19:59:43.162987 | orchestrator | 2025-05-19 19:59:43.163002 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-19 19:59:43.163079 | orchestrator | Monday 19 May 2025 19:57:13 +0000 (0:00:05.550) 0:00:51.573 ************ 2025-05-19 19:59:43.163098 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163128 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163187 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163204 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163220 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163235 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 19:59:43.163265 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163325 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163336 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163345 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163362 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163402 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 19:59:43.163418 | orchestrator | 2025-05-19 19:59:43.163431 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-19 19:59:43.163445 | orchestrator | Monday 19 May 2025 19:57:18 +0000 (0:00:04.770) 0:00:56.343 ************ 2025-05-19 19:59:43.163458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:43.163472 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:43.163486 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 19:59:43.163499 | orchestrator | 2025-05-19 19:59:43.163513 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-19 19:59:43.163527 | orchestrator | Monday 19 May 2025 19:57:20 +0000 (0:00:02.315) 0:00:58.659 ************ 2025-05-19 19:59:43.163540 | 
orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-19 19:59:43.163552 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-19 19:59:43.163560 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-19 19:59:43.163568 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 19:59:43.163575 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 19:59:43.163583 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 19:59:43.163591 | orchestrator | 2025-05-19 19:59:43.163599 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-19 19:59:43.163606 | orchestrator | Monday 19 May 2025 19:57:24 +0000 (0:00:03.494) 0:01:02.153 ************ 2025-05-19 19:59:43.163614 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-19 19:59:43.163622 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-19 19:59:43.163630 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-19 19:59:43.163638 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-19 19:59:43.163646 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-19 19:59:43.163654 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-19 19:59:43.163662 | orchestrator | 2025-05-19 19:59:43.163669 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-19 19:59:43.163677 | orchestrator | Monday 19 May 2025 19:57:25 +0000 (0:00:01.862) 0:01:04.016 ************ 2025-05-19 19:59:43.163693 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.163701 | orchestrator | 2025-05-19 19:59:43.163709 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-19 19:59:43.163717 | orchestrator | Monday 19 May 2025 19:57:26 +0000 (0:00:00.259) 0:01:04.276 ************ 2025-05-19 19:59:43.163724 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.163732 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.163740 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.163747 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.163755 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.163763 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.163770 | orchestrator | 2025-05-19 19:59:43.163779 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 19:59:43.163786 | orchestrator | Monday 19 May 2025 19:57:27 +0000 (0:00:01.310) 0:01:05.587 ************ 2025-05-19 19:59:43.163796 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 19:59:43.163805 | orchestrator | 2025-05-19 19:59:43.163813 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-19 19:59:43.163821 | orchestrator | Monday 19 May 2025 19:57:28 +0000 (0:00:01.304) 0:01:06.891 ************ 2025-05-19 19:59:43.163829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.163872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.163883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.163906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163914 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.163994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.164002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.164035 | orchestrator | 2025-05-19 19:59:43.164045 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-19 19:59:43.164053 | orchestrator | Monday 19 May 2025 19:57:32 +0000 (0:00:03.372) 0:01:10.264 ************ 2025-05-19 19:59:43.164091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164102 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164119 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.164127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164144 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.164155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164230 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.164317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164344 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.164352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164410 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.164424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164432 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.164440 | orchestrator | 2025-05-19 19:59:43.164449 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-19 19:59:43.164457 | orchestrator | Monday 19 May 2025 19:57:35 +0000 (0:00:02.972) 0:01:13.237 ************ 2025-05-19 19:59:43.164465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164482 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.164490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164546 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.164554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164571 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.164579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164595 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.164630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 
19:59:43.164717 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.164727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164745 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.164753 | orchestrator | 2025-05-19 19:59:43.164761 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-19 19:59:43.164769 | orchestrator | Monday 19 May 2025 19:57:38 +0000 (0:00:03.046) 0:01:16.285 ************ 2025-05-19 19:59:43.164777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.164856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.164892 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.164908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.164924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.164933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.164974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.164984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.164992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165213 | orchestrator | 2025-05-19 19:59:43.165221 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-19 19:59:43.165229 | orchestrator | Monday 19 May 2025 19:57:41 +0000 (0:00:03.271) 0:01:19.556 ************ 2025-05-19 19:59:43.165237 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 19:59:43.165245 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.165253 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 19:59:43.165260 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.165268 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-19 19:59:43.165276 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.165284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 19:59:43.165292 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 19:59:43.165300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-19 19:59:43.165307 | orchestrator | 2025-05-19 19:59:43.165315 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-19 19:59:43.165323 | orchestrator | Monday 19 May 2025 19:57:44 +0000 (0:00:02.790) 0:01:22.347 ************ 2025-05-19 19:59:43.165331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.165424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.165442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.165452 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
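
The loop items echoed by the "Copying over cinder.conf" task above are the per-service definitions the kolla-ansible cinder role iterates over: one entry per service (cinder-api, cinder-scheduler, cinder-volume, cinder-backup) carrying the container name, image, bind mounts and Docker healthcheck. Reconstructed from the log output above (not quoted from the role source), the cinder-backup entry corresponds roughly to the following YAML; the empty strings in the volumes lists are presumably optional mounts whose conditional expressions rendered to '' because the corresponding feature is disabled:

    cinder-backup:
      container_name: cinder_backup
      group: cinder-backup
      enabled: true
      image: registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206
      privileged: true
      volumes:
        - /etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro   # generated config files, read-only
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - /dev/:/dev/
        - /lib/modules:/lib/modules:ro
        - /run:/run:shared
        - cinder:/var/lib/cinder          # named volume also mounted by cinder_volume
        - iscsi_info:/etc/iscsi
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:                        # values appear as strings (seconds) in the log items
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_port cinder-backup 5672"]   # 5672 is the RabbitMQ port the service connects to
        timeout: "30"

The same structure explains the variation between items above: the cinder-volume entries additionally carry ipc_mode: host, a tmpfs entry and the /opt/cinder-driver-dm-clone bind mount, while the cinder-api entries replace the port healthcheck with healthcheck_curl against port 8776 and add the haproxy listener definitions.
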
2025-05-19 19:59:43.165523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.165676 | orchestrator | 2025-05-19 19:59:43.165698 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-19 19:59:43.165710 | orchestrator | Monday 19 May 2025 19:58:01 +0000 
(0:00:17.234) 0:01:39.581 ************ 2025-05-19 19:59:43.165721 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.165732 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.165742 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.165752 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:43.165762 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:43.165773 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:43.165783 | orchestrator | 2025-05-19 19:59:43.165794 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-19 19:59:43.165804 | orchestrator | Monday 19 May 2025 19:58:03 +0000 (0:00:02.283) 0:01:41.865 ************ 2025-05-19 19:59:43.165815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165892 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.165900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165954 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.165961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165972 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 19:59:43.165979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.165986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.165993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166069 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.166076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.166091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166112 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.166130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.166138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166165 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.166172 | orchestrator | 2025-05-19 19:59:43.166178 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-19 19:59:43.166185 | orchestrator | Monday 19 May 2025 19:58:05 +0000 (0:00:01.492) 0:01:43.358 ************ 2025-05-19 19:59:43.166192 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.166199 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.166205 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.166212 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.166219 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.166225 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 19:59:43.166232 | orchestrator | 2025-05-19 19:59:43.166239 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-19 19:59:43.166245 | orchestrator | Monday 19 May 2025 19:58:06 +0000 (0:00:00.961) 0:01:44.319 ************ 2025-05-19 19:59:43.166260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.166267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.166285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 19:59:43.166299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.166326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.166333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 19:59:43.166340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 19:59:43.166458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 19:59:43.166483 | orchestrator | 2025-05-19 19:59:43.166490 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 19:59:43.166497 | orchestrator | Monday 19 May 2025 19:58:09 +0000 (0:00:03.230) 0:01:47.550 ************ 2025-05-19 19:59:43.166504 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.166510 | orchestrator | skipping: [testbed-node-1] 2025-05-19 19:59:43.166517 | orchestrator | skipping: [testbed-node-2] 2025-05-19 19:59:43.166524 | orchestrator | skipping: [testbed-node-3] 2025-05-19 19:59:43.166530 | orchestrator | skipping: [testbed-node-4] 2025-05-19 19:59:43.166537 | orchestrator | skipping: [testbed-node-5] 2025-05-19 19:59:43.166543 | orchestrator | 2025-05-19 19:59:43.166550 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-19 19:59:43.166557 | orchestrator | Monday 19 May 2025 19:58:10 +0000 (0:00:00.939) 0:01:48.489 ************ 2025-05-19 19:59:43.166563 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:43.166570 | orchestrator | 2025-05-19 19:59:43.166577 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-19 19:59:43.166583 | orchestrator | Monday 19 May 2025 19:58:13 +0000 (0:00:02.778) 0:01:51.268 ************ 2025-05-19 19:59:43.166590 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:43.166596 | orchestrator | 2025-05-19 19:59:43.166603 | orchestrator | TASK [cinder : Running Cinder bootstrap container] 
***************************** 2025-05-19 19:59:43.166610 | orchestrator | Monday 19 May 2025 19:58:15 +0000 (0:00:02.515) 0:01:53.784 ************ 2025-05-19 19:59:43.166617 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:43.166623 | orchestrator | 2025-05-19 19:59:43.166630 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166637 | orchestrator | Monday 19 May 2025 19:58:34 +0000 (0:00:18.782) 0:02:12.567 ************ 2025-05-19 19:59:43.166643 | orchestrator | 2025-05-19 19:59:43.166650 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166656 | orchestrator | Monday 19 May 2025 19:58:34 +0000 (0:00:00.128) 0:02:12.695 ************ 2025-05-19 19:59:43.166663 | orchestrator | 2025-05-19 19:59:43.166670 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166682 | orchestrator | Monday 19 May 2025 19:58:34 +0000 (0:00:00.386) 0:02:13.081 ************ 2025-05-19 19:59:43.166689 | orchestrator | 2025-05-19 19:59:43.166696 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166702 | orchestrator | Monday 19 May 2025 19:58:34 +0000 (0:00:00.055) 0:02:13.137 ************ 2025-05-19 19:59:43.166709 | orchestrator | 2025-05-19 19:59:43.166716 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166722 | orchestrator | Monday 19 May 2025 19:58:35 +0000 (0:00:00.093) 0:02:13.230 ************ 2025-05-19 19:59:43.166729 | orchestrator | 2025-05-19 19:59:43.166735 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 19:59:43.166742 | orchestrator | Monday 19 May 2025 19:58:35 +0000 (0:00:00.056) 0:02:13.286 ************ 2025-05-19 19:59:43.166748 | orchestrator | 2025-05-19 19:59:43.166755 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-19 19:59:43.166762 | orchestrator | Monday 19 May 2025 19:58:35 +0000 (0:00:00.486) 0:02:13.772 ************ 2025-05-19 19:59:43.166768 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:43.166775 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:43.166781 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:43.166788 | orchestrator | 2025-05-19 19:59:43.166795 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-19 19:59:43.166801 | orchestrator | Monday 19 May 2025 19:58:57 +0000 (0:00:21.813) 0:02:35.586 ************ 2025-05-19 19:59:43.166808 | orchestrator | changed: [testbed-node-2] 2025-05-19 19:59:43.166815 | orchestrator | changed: [testbed-node-1] 2025-05-19 19:59:43.166822 | orchestrator | changed: [testbed-node-0] 2025-05-19 19:59:43.166828 | orchestrator | 2025-05-19 19:59:43.166838 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-19 19:59:43.166849 | orchestrator | Monday 19 May 2025 19:59:06 +0000 (0:00:09.193) 0:02:44.779 ************ 2025-05-19 19:59:43.166856 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:43.166862 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:43.166869 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:43.166875 | orchestrator | 2025-05-19 19:59:43.166882 | orchestrator | RUNNING HANDLER [cinder : Restart 
cinder-backup container] ********************* 2025-05-19 19:59:43.166889 | orchestrator | Monday 19 May 2025 19:59:29 +0000 (0:00:23.336) 0:03:08.116 ************ 2025-05-19 19:59:43.166895 | orchestrator | changed: [testbed-node-3] 2025-05-19 19:59:43.166902 | orchestrator | changed: [testbed-node-4] 2025-05-19 19:59:43.166908 | orchestrator | changed: [testbed-node-5] 2025-05-19 19:59:43.166915 | orchestrator | 2025-05-19 19:59:43.166922 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-19 19:59:43.166928 | orchestrator | Monday 19 May 2025 19:59:41 +0000 (0:00:11.169) 0:03:19.286 ************ 2025-05-19 19:59:43.166935 | orchestrator | skipping: [testbed-node-0] 2025-05-19 19:59:43.166941 | orchestrator | 2025-05-19 19:59:43.166948 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 19:59:43.166956 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-19 19:59:43.166963 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 19:59:43.166970 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 19:59:43.166977 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:59:43.166984 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:59:43.166997 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 19:59:43.167004 | orchestrator | 2025-05-19 19:59:43.167034 | orchestrator | 2025-05-19 19:59:43.167041 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 19:59:43.167048 | orchestrator | Monday 19 May 2025 19:59:41 +0000 (0:00:00.612) 0:03:19.899 ************ 2025-05-19 19:59:43.167054 | orchestrator | =============================================================================== 2025-05-19 19:59:43.167061 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 23.34s 2025-05-19 19:59:43.167068 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.81s 2025-05-19 19:59:43.167074 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.78s 2025-05-19 19:59:43.167081 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 17.23s 2025-05-19 19:59:43.167087 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.17s 2025-05-19 19:59:43.167094 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.57s 2025-05-19 19:59:43.167101 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.19s 2025-05-19 19:59:43.167107 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.97s 2025-05-19 19:59:43.167114 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 5.55s 2025-05-19 19:59:43.167121 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.77s 2025-05-19 19:59:43.167127 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.64s 2025-05-19 19:59:43.167134 | 
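The per-item output in the cinder play above comes from kolla-ansible iterating over a map of service definitions (container_name, group, enabled, image, volumes, dimensions, healthcheck); a host only acts on entries whose group it belongs to, which is why the control nodes report "changed" for cinder-scheduler while skipping cinder-volume and cinder-backup, and the volume nodes do the opposite. A minimal, illustrative Python sketch of that selection logic follows; it is not kolla-ansible's actual code, and the example values are copied from the log above.

    # Illustrative sketch only -- not kolla-ansible's implementation.
    cinder_services = {
        "cinder-scheduler": {
            "container_name": "cinder_scheduler",
            "group": "cinder-scheduler",
            "enabled": True,
            "image": "registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206",
            "healthcheck": {
                "interval": "30", "retries": "3", "start_period": "5",
                "test": ["CMD-SHELL", "healthcheck_port cinder-scheduler 5672"],
                "timeout": "30",
            },
        },
        # cinder-volume and cinder-backup omitted for brevity
    }

    def services_for_host(services, host_groups):
        # A service is acted on by a host only if it is enabled and the host is a
        # member of the service's group; everything else shows up as "skipping".
        for name, definition in services.items():
            if definition["enabled"] and definition["group"] in host_groups:
                yield name, definition

    # A control node (member of cinder-scheduler, but not cinder-volume/cinder-backup):
    for name, definition in services_for_host(cinder_services, {"cinder-scheduler"}):
        print(name, "->", definition["container_name"])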
orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.56s 2025-05-19 19:59:43.167141 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.16s 2025-05-19 19:59:43.167147 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.80s 2025-05-19 19:59:43.167154 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.49s 2025-05-19 19:59:43.167160 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.43s 2025-05-19 19:59:43.167167 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.37s 2025-05-19 19:59:43.167173 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.27s 2025-05-19 19:59:43.167180 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.23s 2025-05-19 19:59:43.167187 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 3.05s 2025-05-19 19:59:43.167193 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:43.167200 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:43.167207 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:43.167214 | orchestrator | 2025-05-19 19:59:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:43.167227 | orchestrator | 2025-05-19 19:59:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:46.223434 | orchestrator | 2025-05-19 19:59:46 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:46.225553 | orchestrator | 2025-05-19 19:59:46 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:46.227277 | orchestrator | 2025-05-19 19:59:46 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:46.228410 | orchestrator | 2025-05-19 19:59:46 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:46.229440 | orchestrator | 2025-05-19 19:59:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:46.229469 | orchestrator | 2025-05-19 19:59:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:49.286255 | orchestrator | 2025-05-19 19:59:49 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:49.286386 | orchestrator | 2025-05-19 19:59:49 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:49.286412 | orchestrator | 2025-05-19 19:59:49 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:49.286432 | orchestrator | 2025-05-19 19:59:49 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:49.286857 | orchestrator | 2025-05-19 19:59:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:49.286882 | orchestrator | 2025-05-19 19:59:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:52.336635 | orchestrator | 2025-05-19 19:59:52 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:52.337976 | orchestrator | 2025-05-19 19:59:52 | INFO  | Task 
e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:52.339133 | orchestrator | 2025-05-19 19:59:52 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:52.339989 | orchestrator | 2025-05-19 19:59:52 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:52.341299 | orchestrator | 2025-05-19 19:59:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:52.341340 | orchestrator | 2025-05-19 19:59:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:55.389817 | orchestrator | 2025-05-19 19:59:55 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:55.389982 | orchestrator | 2025-05-19 19:59:55 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:55.391316 | orchestrator | 2025-05-19 19:59:55 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:55.392569 | orchestrator | 2025-05-19 19:59:55 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:55.393625 | orchestrator | 2025-05-19 19:59:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:55.393648 | orchestrator | 2025-05-19 19:59:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 19:59:58.439649 | orchestrator | 2025-05-19 19:59:58 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 19:59:58.440775 | orchestrator | 2025-05-19 19:59:58 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 19:59:58.442485 | orchestrator | 2025-05-19 19:59:58 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 19:59:58.443823 | orchestrator | 2025-05-19 19:59:58 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 19:59:58.445151 | orchestrator | 2025-05-19 19:59:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 19:59:58.445186 | orchestrator | 2025-05-19 19:59:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:01.495851 | orchestrator | 2025-05-19 20:00:01 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 20:00:01.496070 | orchestrator | 2025-05-19 20:00:01 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:01.496811 | orchestrator | 2025-05-19 20:00:01 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:01.497548 | orchestrator | 2025-05-19 20:00:01 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:01.498571 | orchestrator | 2025-05-19 20:00:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:01.498751 | orchestrator | 2025-05-19 20:00:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:04.538012 | orchestrator | 2025-05-19 20:00:04 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 20:00:04.538888 | orchestrator | 2025-05-19 20:00:04 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:04.540235 | orchestrator | 2025-05-19 20:00:04 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:04.541847 | orchestrator | 2025-05-19 20:00:04 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:04.543442 | orchestrator | 2025-05-19 
20:00:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:04.543509 | orchestrator | 2025-05-19 20:00:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:07.583929 | orchestrator | 2025-05-19 20:00:07 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 20:00:07.586216 | orchestrator | 2025-05-19 20:00:07 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:07.588374 | orchestrator | 2025-05-19 20:00:07 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:07.590195 | orchestrator | 2025-05-19 20:00:07 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:07.591586 | orchestrator | 2025-05-19 20:00:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:07.591823 | orchestrator | 2025-05-19 20:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:10.647123 | orchestrator | 2025-05-19 20:00:10 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state STARTED 2025-05-19 20:00:10.649014 | orchestrator | 2025-05-19 20:00:10 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:10.651179 | orchestrator | 2025-05-19 20:00:10 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:10.652728 | orchestrator | 2025-05-19 20:00:10 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:10.654564 | orchestrator | 2025-05-19 20:00:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:10.654642 | orchestrator | 2025-05-19 20:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:13.699138 | orchestrator | 2025-05-19 20:00:13 | INFO  | Task f4c43742-8142-495a-baca-9d271e629d63 is in state SUCCESS 2025-05-19 20:00:13.699855 | orchestrator | 2025-05-19 20:00:13 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:13.701282 | orchestrator | 2025-05-19 20:00:13 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:13.703310 | orchestrator | 2025-05-19 20:00:13 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:13.704728 | orchestrator | 2025-05-19 20:00:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:13.704800 | orchestrator | 2025-05-19 20:00:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:16.752350 | orchestrator | 2025-05-19 20:00:16 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:16.754420 | orchestrator | 2025-05-19 20:00:16 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:16.755549 | orchestrator | 2025-05-19 20:00:16 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:16.756733 | orchestrator | 2025-05-19 20:00:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:16.756797 | orchestrator | 2025-05-19 20:00:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:19.801376 | orchestrator | 2025-05-19 20:00:19 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:19.803445 | orchestrator | 2025-05-19 20:00:19 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:19.805511 | orchestrator | 2025-05-19 
20:00:19 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:19.807034 | orchestrator | 2025-05-19 20:00:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:19.807075 | orchestrator | 2025-05-19 20:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:22.850896 | orchestrator | 2025-05-19 20:00:22 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:22.853092 | orchestrator | 2025-05-19 20:00:22 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:22.854888 | orchestrator | 2025-05-19 20:00:22 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:22.857006 | orchestrator | 2025-05-19 20:00:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:22.857180 | orchestrator | 2025-05-19 20:00:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:25.910642 | orchestrator | 2025-05-19 20:00:25 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:25.911142 | orchestrator | 2025-05-19 20:00:25 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:25.914435 | orchestrator | 2025-05-19 20:00:25 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:25.916606 | orchestrator | 2025-05-19 20:00:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:25.916645 | orchestrator | 2025-05-19 20:00:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:28.963363 | orchestrator | 2025-05-19 20:00:28 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:28.965755 | orchestrator | 2025-05-19 20:00:28 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:28.966976 | orchestrator | 2025-05-19 20:00:28 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:28.968565 | orchestrator | 2025-05-19 20:00:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:28.968592 | orchestrator | 2025-05-19 20:00:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:32.014794 | orchestrator | 2025-05-19 20:00:32 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:32.016497 | orchestrator | 2025-05-19 20:00:32 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:32.018328 | orchestrator | 2025-05-19 20:00:32 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:32.020452 | orchestrator | 2025-05-19 20:00:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:32.021571 | orchestrator | 2025-05-19 20:00:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:35.073054 | orchestrator | 2025-05-19 20:00:35 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:35.073400 | orchestrator | 2025-05-19 20:00:35 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:35.074206 | orchestrator | 2025-05-19 20:00:35 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:35.075389 | orchestrator | 2025-05-19 20:00:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:35.075646 | orchestrator | 2025-05-19 
20:00:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:38.126276 | orchestrator | 2025-05-19 20:00:38 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:38.127175 | orchestrator | 2025-05-19 20:00:38 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:38.128476 | orchestrator | 2025-05-19 20:00:38 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:38.130181 | orchestrator | 2025-05-19 20:00:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:38.130234 | orchestrator | 2025-05-19 20:00:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:41.171866 | orchestrator | 2025-05-19 20:00:41 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:41.172032 | orchestrator | 2025-05-19 20:00:41 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:41.173379 | orchestrator | 2025-05-19 20:00:41 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:41.174257 | orchestrator | 2025-05-19 20:00:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:41.174309 | orchestrator | 2025-05-19 20:00:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:44.236404 | orchestrator | 2025-05-19 20:00:44 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:44.238673 | orchestrator | 2025-05-19 20:00:44 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:44.240767 | orchestrator | 2025-05-19 20:00:44 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:44.242751 | orchestrator | 2025-05-19 20:00:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:44.242821 | orchestrator | 2025-05-19 20:00:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:47.315647 | orchestrator | 2025-05-19 20:00:47 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:47.317181 | orchestrator | 2025-05-19 20:00:47 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:47.318711 | orchestrator | 2025-05-19 20:00:47 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:47.320383 | orchestrator | 2025-05-19 20:00:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:47.320427 | orchestrator | 2025-05-19 20:00:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:50.373932 | orchestrator | 2025-05-19 20:00:50 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:50.375495 | orchestrator | 2025-05-19 20:00:50 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:50.375562 | orchestrator | 2025-05-19 20:00:50 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:50.375594 | orchestrator | 2025-05-19 20:00:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:50.375603 | orchestrator | 2025-05-19 20:00:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:53.415465 | orchestrator | 2025-05-19 20:00:53 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:53.416029 | orchestrator | 2025-05-19 20:00:53 | INFO  | Task 
a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:53.417299 | orchestrator | 2025-05-19 20:00:53 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:53.418821 | orchestrator | 2025-05-19 20:00:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:53.418918 | orchestrator | 2025-05-19 20:00:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:56.465368 | orchestrator | 2025-05-19 20:00:56 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:56.466821 | orchestrator | 2025-05-19 20:00:56 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:56.468673 | orchestrator | 2025-05-19 20:00:56 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:56.470711 | orchestrator | 2025-05-19 20:00:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:56.471410 | orchestrator | 2025-05-19 20:00:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:00:59.520625 | orchestrator | 2025-05-19 20:00:59 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:00:59.521931 | orchestrator | 2025-05-19 20:00:59 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:00:59.523978 | orchestrator | 2025-05-19 20:00:59 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:00:59.525489 | orchestrator | 2025-05-19 20:00:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:00:59.525525 | orchestrator | 2025-05-19 20:00:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:02.578471 | orchestrator | 2025-05-19 20:01:02 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:02.578612 | orchestrator | 2025-05-19 20:01:02 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:02.579868 | orchestrator | 2025-05-19 20:01:02 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:02.581007 | orchestrator | 2025-05-19 20:01:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:02.581058 | orchestrator | 2025-05-19 20:01:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:05.639218 | orchestrator | 2025-05-19 20:01:05 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:05.640781 | orchestrator | 2025-05-19 20:01:05 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:05.641969 | orchestrator | 2025-05-19 20:01:05 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:05.643631 | orchestrator | 2025-05-19 20:01:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:05.643682 | orchestrator | 2025-05-19 20:01:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:08.698632 | orchestrator | 2025-05-19 20:01:08 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:08.700257 | orchestrator | 2025-05-19 20:01:08 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:08.702245 | orchestrator | 2025-05-19 20:01:08 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:08.703680 | orchestrator | 2025-05-19 20:01:08 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:08.703760 | orchestrator | 2025-05-19 20:01:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:11.757974 | orchestrator | 2025-05-19 20:01:11 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:11.759106 | orchestrator | 2025-05-19 20:01:11 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:11.760112 | orchestrator | 2025-05-19 20:01:11 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:11.761264 | orchestrator | 2025-05-19 20:01:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:11.761288 | orchestrator | 2025-05-19 20:01:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:14.818786 | orchestrator | 2025-05-19 20:01:14 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:14.819607 | orchestrator | 2025-05-19 20:01:14 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:14.819934 | orchestrator | 2025-05-19 20:01:14 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:14.820876 | orchestrator | 2025-05-19 20:01:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:14.820910 | orchestrator | 2025-05-19 20:01:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:17.867836 | orchestrator | 2025-05-19 20:01:17 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:17.869596 | orchestrator | 2025-05-19 20:01:17 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:17.871931 | orchestrator | 2025-05-19 20:01:17 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:17.874181 | orchestrator | 2025-05-19 20:01:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:17.874239 | orchestrator | 2025-05-19 20:01:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:20.925623 | orchestrator | 2025-05-19 20:01:20 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:20.927731 | orchestrator | 2025-05-19 20:01:20 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:20.930637 | orchestrator | 2025-05-19 20:01:20 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state STARTED 2025-05-19 20:01:20.932973 | orchestrator | 2025-05-19 20:01:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:20.933022 | orchestrator | 2025-05-19 20:01:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:24.008737 | orchestrator | 2025-05-19 20:01:24 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:24.013746 | orchestrator | 2025-05-19 20:01:24 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:24.015139 | orchestrator | 2025-05-19 20:01:24 | INFO  | Task 7b4749a1-cd50-4646-9e53-055eaa9f8e34 is in state SUCCESS 2025-05-19 20:01:24.016658 | orchestrator | 2025-05-19 20:01:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:24.016776 | orchestrator | 2025-05-19 20:01:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:27.059885 | orchestrator | 2025-05-19 20:01:27 | INFO  | Task 
e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:27.060499 | orchestrator | 2025-05-19 20:01:27 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:27.062329 | orchestrator | 2025-05-19 20:01:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:27.062364 | orchestrator | 2025-05-19 20:01:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:30.111204 | orchestrator | 2025-05-19 20:01:30 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:30.112412 | orchestrator | 2025-05-19 20:01:30 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:30.114060 | orchestrator | 2025-05-19 20:01:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:30.114116 | orchestrator | 2025-05-19 20:01:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:33.155224 | orchestrator | 2025-05-19 20:01:33 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:33.156067 | orchestrator | 2025-05-19 20:01:33 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:33.156340 | orchestrator | 2025-05-19 20:01:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:33.156361 | orchestrator | 2025-05-19 20:01:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:36.205530 | orchestrator | 2025-05-19 20:01:36 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:36.206145 | orchestrator | 2025-05-19 20:01:36 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:36.206882 | orchestrator | 2025-05-19 20:01:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:36.207259 | orchestrator | 2025-05-19 20:01:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:39.252339 | orchestrator | 2025-05-19 20:01:39 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:39.261405 | orchestrator | 2025-05-19 20:01:39 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:39.263009 | orchestrator | 2025-05-19 20:01:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:39.263499 | orchestrator | 2025-05-19 20:01:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:42.321353 | orchestrator | 2025-05-19 20:01:42 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:42.323569 | orchestrator | 2025-05-19 20:01:42 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state STARTED 2025-05-19 20:01:42.326516 | orchestrator | 2025-05-19 20:01:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:42.326597 | orchestrator | 2025-05-19 20:01:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:45.379181 | orchestrator | 2025-05-19 20:01:45 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:45.379268 | orchestrator | 2025-05-19 20:01:45 | INFO  | Task a0992ffa-d864-4aca-ad9c-b7800bb3a38b is in state SUCCESS 2025-05-19 20:01:45.381509 | orchestrator | 2025-05-19 20:01:45.381545 | orchestrator | 2025-05-19 20:01:45.381555 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 20:01:45.381563 | 
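The interleaved INFO lines above are the deployment wrapper polling several long-running OSISM tasks by UUID until each reaches SUCCESS (f4c43742… finishes first, then 7b4749a1…, then a0992ffa…, while e7f1f7b2… and 6cbcb477… are still running). A rough Python sketch of such a poll loop follows; get_task_state() is a hypothetical stand-in for the real task-result lookup, so this illustrates the pattern rather than the actual OSISM code. Although the message says "Wait 1 second(s)", successive rounds in the log land about three seconds apart, presumably because each round of state lookups takes time of its own.

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        # Poll every task that has not finished yet, drop the ones that reach
        # SUCCESS, then sleep briefly before the next round -- mirroring the
        # "Task ... is in state ..." / "Wait 1 second(s) until the next check"
        # lines in the log. Failure states are ignored here for brevity.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # hypothetical helper
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)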
orchestrator | 2025-05-19 20:01:45.381570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 20:01:45.381601 | orchestrator | Monday 19 May 2025 19:59:14 +0000 (0:00:00.362) 0:00:00.362 ************ 2025-05-19 20:01:45.381608 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.381617 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:01:45.381621 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:01:45.381625 | orchestrator | 2025-05-19 20:01:45.381629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 20:01:45.381633 | orchestrator | Monday 19 May 2025 19:59:15 +0000 (0:00:00.414) 0:00:00.776 ************ 2025-05-19 20:01:45.381637 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-19 20:01:45.381641 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-19 20:01:45.381645 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-19 20:01:45.381649 | orchestrator | 2025-05-19 20:01:45.381653 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-19 20:01:45.381657 | orchestrator | 2025-05-19 20:01:45.381660 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 20:01:45.381664 | orchestrator | Monday 19 May 2025 19:59:15 +0000 (0:00:00.307) 0:00:01.084 ************ 2025-05-19 20:01:45.381669 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:01:45.381674 | orchestrator | 2025-05-19 20:01:45.381678 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-19 20:01:45.381681 | orchestrator | Monday 19 May 2025 19:59:16 +0000 (0:00:00.826) 0:00:01.910 ************ 2025-05-19 20:01:45.381686 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-19 20:01:45.381690 | orchestrator | 2025-05-19 20:01:45.381705 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-19 20:01:45.381709 | orchestrator | Monday 19 May 2025 19:59:20 +0000 (0:00:03.730) 0:00:05.641 ************ 2025-05-19 20:01:45.381712 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-19 20:01:45.381716 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-19 20:01:45.381720 | orchestrator | 2025-05-19 20:01:45.381724 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-19 20:01:45.381728 | orchestrator | Monday 19 May 2025 19:59:27 +0000 (0:00:07.601) 0:00:13.242 ************ 2025-05-19 20:01:45.381741 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 20:01:45.381746 | orchestrator | 2025-05-19 20:01:45.381749 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-19 20:01:45.381753 | orchestrator | Monday 19 May 2025 19:59:31 +0000 (0:00:03.636) 0:00:16.879 ************ 2025-05-19 20:01:45.381757 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 20:01:45.381761 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-19 20:01:45.381766 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-19 
20:01:45.381772 | orchestrator | 2025-05-19 20:01:45.381777 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-19 20:01:45.381784 | orchestrator | Monday 19 May 2025 19:59:39 +0000 (0:00:08.534) 0:00:25.415 ************ 2025-05-19 20:01:45.381790 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 20:01:45.381796 | orchestrator | 2025-05-19 20:01:45.381894 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-19 20:01:45.381902 | orchestrator | Monday 19 May 2025 19:59:43 +0000 (0:00:03.311) 0:00:28.726 ************ 2025-05-19 20:01:45.381908 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 20:01:45.381914 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 20:01:45.381920 | orchestrator | 2025-05-19 20:01:45.381955 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-19 20:01:45.381962 | orchestrator | Monday 19 May 2025 19:59:51 +0000 (0:00:08.545) 0:00:37.272 ************ 2025-05-19 20:01:45.381976 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-19 20:01:45.381983 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-19 20:01:45.381989 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-19 20:01:45.381997 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-19 20:01:45.382001 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-19 20:01:45.382004 | orchestrator | 2025-05-19 20:01:45.382008 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 20:01:45.382044 | orchestrator | Monday 19 May 2025 20:00:08 +0000 (0:00:16.849) 0:00:54.121 ************ 2025-05-19 20:01:45.382051 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:01:45.382058 | orchestrator | 2025-05-19 20:01:45.382065 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-19 20:01:45.382070 | orchestrator | Monday 19 May 2025 20:00:09 +0000 (0:00:00.878) 0:00:55.000 ************ 2025-05-19 20:01:45.382091 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "} 2025-05-19 20:01:45.382101 | orchestrator | 2025-05-19 20:01:45.382108 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 20:01:45.382115 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382120 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382125 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382129 | orchestrator | 2025-05-19 20:01:45.382132 | orchestrator | 2025-05-19 20:01:45.382192 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 20:01:45.382200 | orchestrator | Monday 19 May 2025 20:00:12 +0000 (0:00:03.584) 0:00:58.584 ************ 2025-05-19 20:01:45.382206 | orchestrator | =============================================================================== 2025-05-19 20:01:45.382213 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.85s 2025-05-19 20:01:45.382219 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.55s 2025-05-19 20:01:45.382226 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.53s 2025-05-19 20:01:45.382232 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.60s 2025-05-19 20:01:45.382239 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.73s 2025-05-19 20:01:45.382248 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.64s 2025-05-19 20:01:45.382252 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.58s 2025-05-19 20:01:45.382256 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.31s 2025-05-19 20:01:45.382261 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.88s 2025-05-19 20:01:45.382267 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.83s 2025-05-19 20:01:45.382273 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.41s 2025-05-19 20:01:45.382279 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2025-05-19 20:01:45.382292 | orchestrator | 2025-05-19 20:01:45.382299 | orchestrator | 2025-05-19 20:01:45.382305 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 20:01:45.382311 | orchestrator | 2025-05-19 20:01:45.382317 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 20:01:45.382320 | orchestrator | Monday 19 May 2025 19:59:05 +0000 (0:00:00.174) 0:00:00.174 ************ 2025-05-19 20:01:45.382324 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.382329 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:01:45.382333 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:01:45.382336 | orchestrator | 2025-05-19 20:01:45.382340 | orchestrator | TASK [Group hosts based 
on enabled services] *********************************** 2025-05-19 20:01:45.382344 | orchestrator | Monday 19 May 2025 19:59:06 +0000 (0:00:00.366) 0:00:00.540 ************ 2025-05-19 20:01:45.382348 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-19 20:01:45.382352 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-19 20:01:45.382355 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-19 20:01:45.382359 | orchestrator | 2025-05-19 20:01:45.382363 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-19 20:01:45.382367 | orchestrator | 2025-05-19 20:01:45.382371 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-19 20:01:45.382374 | orchestrator | Monday 19 May 2025 19:59:06 +0000 (0:00:00.504) 0:00:01.045 ************ 2025-05-19 20:01:45.382378 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.382382 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:01:45.382386 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:01:45.382391 | orchestrator | 2025-05-19 20:01:45.382397 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 20:01:45.382404 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382410 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382416 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:01:45.382422 | orchestrator | 2025-05-19 20:01:45.382429 | orchestrator | 2025-05-19 20:01:45.382436 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 20:01:45.382442 | orchestrator | Monday 19 May 2025 20:01:22 +0000 (0:02:15.234) 0:02:16.279 ************ 2025-05-19 20:01:45.382448 | orchestrator | =============================================================================== 2025-05-19 20:01:45.382454 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 135.23s 2025-05-19 20:01:45.382461 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-05-19 20:01:45.382465 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s 2025-05-19 20:01:45.382469 | orchestrator | 2025-05-19 20:01:45.382473 | orchestrator | 2025-05-19 20:01:45.382476 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 20:01:45.382480 | orchestrator | 2025-05-19 20:01:45.382484 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 20:01:45.382495 | orchestrator | Monday 19 May 2025 19:59:45 +0000 (0:00:00.314) 0:00:00.315 ************ 2025-05-19 20:01:45.382499 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.382503 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:01:45.382506 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:01:45.382510 | orchestrator | 2025-05-19 20:01:45.382515 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 20:01:45.382521 | orchestrator | Monday 19 May 2025 19:59:45 +0000 (0:00:00.444) 0:00:00.759 ************ 2025-05-19 20:01:45.382527 | orchestrator | ok: [testbed-node-0] => 
(item=enable_grafana_True) 2025-05-19 20:01:45.382534 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-19 20:01:45.382546 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-19 20:01:45.382552 | orchestrator | 2025-05-19 20:01:45.382559 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-19 20:01:45.382565 | orchestrator | 2025-05-19 20:01:45.382570 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 20:01:45.382574 | orchestrator | Monday 19 May 2025 19:59:45 +0000 (0:00:00.311) 0:00:01.070 ************ 2025-05-19 20:01:45.382578 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:01:45.382582 | orchestrator | 2025-05-19 20:01:45.382586 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-19 20:01:45.382589 | orchestrator | Monday 19 May 2025 19:59:46 +0000 (0:00:00.753) 0:00:01.824 ************ 2025-05-19 20:01:45.382598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382613 | orchestrator | 2025-05-19 20:01:45.382617 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-19 20:01:45.382621 | orchestrator | Monday 19 May 2025 19:59:47 +0000 (0:00:00.892) 0:00:02.717 ************ 2025-05-19 
20:01:45.382625 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-19 20:01:45.382629 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-19 20:01:45.382632 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 20:01:45.382636 | orchestrator | 2025-05-19 20:01:45.382640 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 20:01:45.382644 | orchestrator | Monday 19 May 2025 19:59:48 +0000 (0:00:00.547) 0:00:03.264 ************ 2025-05-19 20:01:45.382647 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:01:45.382653 | orchestrator | 2025-05-19 20:01:45.382659 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-19 20:01:45.382671 | orchestrator | Monday 19 May 2025 19:59:48 +0000 (0:00:00.685) 0:00:03.950 ************ 2025-05-19 20:01:45.382683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382707 | orchestrator | 2025-05-19 20:01:45.382712 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-19 20:01:45.382715 | orchestrator | Monday 19 May 2025 19:59:50 +0000 (0:00:01.652) 0:00:05.602 ************ 2025-05-19 20:01:45.382719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382723 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.382727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382731 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.382746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382753 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.382759 | orchestrator | 2025-05-19 20:01:45.382765 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-19 20:01:45.382771 | orchestrator | Monday 19 May 2025 19:59:51 +0000 (0:00:00.802) 0:00:06.404 ************ 2025-05-19 20:01:45.382779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382784 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.382791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382795 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.382799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 20:01:45.382818 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.382822 | orchestrator | 2025-05-19 20:01:45.382826 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-19 20:01:45.382830 | orchestrator | Monday 19 May 2025 19:59:51 +0000 (0:00:00.708) 0:00:07.113 ************ 2025-05-19 20:01:45.382834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382855 | orchestrator | 2025-05-19 20:01:45.382860 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-19 20:01:45.382866 | orchestrator | Monday 19 May 2025 19:59:53 +0000 (0:00:01.476) 0:00:08.589 ************ 2025-05-19 20:01:45.382876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.382896 | orchestrator | 2025-05-19 20:01:45.382906 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-19 20:01:45.382912 | orchestrator | Monday 19 May 2025 19:59:55 +0000 (0:00:01.656) 0:00:10.246 ************ 2025-05-19 20:01:45.382972 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.382977 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.382981 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.382985 | orchestrator | 2025-05-19 20:01:45.382988 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-19 20:01:45.382992 | orchestrator | Monday 19 May 2025 19:59:55 +0000 (0:00:00.297) 0:00:10.544 ************ 2025-05-19 20:01:45.382996 | orchestrator 
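
The task announced above renders /ansible/roles/grafana/templates/prometheus.yaml.j2 into Grafana's datasource provisioning directory on each controller; the rendered result itself is not shown in the log. The following is a hypothetical sketch of the general shape of such a datasource provisioning document, assuming Grafana's standard provisioning format; the datasource name and URL are placeholder assumptions, not values taken from this job:

import json

# Hypothetical sketch (not taken from this job): general shape of a Grafana
# datasource provisioning document like the one rendered from
# prometheus.yaml.j2. Name and URL below are placeholder assumptions; the real
# template points at the deployment's internal Prometheus endpoint.
rendered_datasource = {
    "apiVersion": 1,
    "datasources": [
        {
            "name": "Prometheus",              # assumed name
            "type": "prometheus",
            "access": "proxy",
            "url": "https://192.0.2.10:9091",  # placeholder VIP/port
            "isDefault": True,
        }
    ],
}

# Printed as JSON only for readability; the provisioning file itself is YAML,
# placed under Grafana's provisioning/datasources directory.
print(json.dumps(rendered_datasource, indent=2))
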
| changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 20:01:45.383000 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 20:01:45.383004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 20:01:45.383007 | orchestrator | 2025-05-19 20:01:45.383011 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-19 20:01:45.383015 | orchestrator | Monday 19 May 2025 19:59:56 +0000 (0:00:01.517) 0:00:12.061 ************ 2025-05-19 20:01:45.383019 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 20:01:45.383023 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 20:01:45.383027 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 20:01:45.383030 | orchestrator | 2025-05-19 20:01:45.383038 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-19 20:01:45.383045 | orchestrator | Monday 19 May 2025 19:59:58 +0000 (0:00:01.474) 0:00:13.536 ************ 2025-05-19 20:01:45.383052 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 20:01:45.383058 | orchestrator | 2025-05-19 20:01:45.383064 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-19 20:01:45.383071 | orchestrator | Monday 19 May 2025 19:59:58 +0000 (0:00:00.458) 0:00:13.994 ************ 2025-05-19 20:01:45.383077 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-19 20:01:45.383083 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-19 20:01:45.383090 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.383096 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:01:45.383102 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:01:45.383109 | orchestrator | 2025-05-19 20:01:45.383116 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-19 20:01:45.383122 | orchestrator | Monday 19 May 2025 19:59:59 +0000 (0:00:00.915) 0:00:14.910 ************ 2025-05-19 20:01:45.383128 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.383133 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.383140 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.383147 | orchestrator | 2025-05-19 20:01:45.383151 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-19 20:01:45.383155 | orchestrator | Monday 19 May 2025 20:00:00 +0000 (0:00:00.463) 0:00:15.374 ************ 2025-05-19 20:01:45.383166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1077258, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2086995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1077258, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2086995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1077258, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2086995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1077242, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1966991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1077242, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1966991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1077242, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1966991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383204 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1077231, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1077231, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1077231, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1077253, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1996992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1077253, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1996992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1077253, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1996992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1077209, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1896992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1077209, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1896992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1077209, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1896992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1077234, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1077234, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1077234, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1946993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1077251, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1077251, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1077251, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1077203, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.188699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1077203, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.188699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1077203, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.188699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1077151, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1676989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1077151, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1676989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1077151, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1676989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1077213, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.190699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1077213, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.190699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1077213, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.190699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1077162, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1716988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1077162, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1716988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1077162, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1716988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1077247, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1077247, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1077247, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.1986992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1077215, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.1926992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1077215, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 
'ctime': 1747681483.1926992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1077215, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.1926992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1077256, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1077256, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1077256, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1077200, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.187699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1077200, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.187699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1077200, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.187699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1077237, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1956992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1077237, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1956992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1077237, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1956992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1077154, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1696987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1077154, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1696987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1077154, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1696987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1077168, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.173699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1077168, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.173699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1077168, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.173699, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1077226, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1936991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1077226, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1936991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1077226, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.1936991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1077304, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 
1077304, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1077304, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2417, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1077299, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2296996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1077299, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2296996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1077299, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2296996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1077331, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1747681483.2477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1077331, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1077331, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1077272, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2096994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1077272, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2096994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1077272, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2096994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1077351, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1077351, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1077351, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2567, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1077317, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2436998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.383991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1077317, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2436998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1077317, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2436998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1077321, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1077321, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1077321, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2447, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1077273, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2106993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 
20:01:45.384041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1077273, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2106993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1077273, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2106993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1077302, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2306998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1077302, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2306998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1077302, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2306998, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1077356, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2577002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1077356, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2577002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1077356, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2577002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1077325, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1077325, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1077325, 'dev': 203, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1747681483.2457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1077280, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2146995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1077280, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2146995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1077280, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2146995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1077275, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2126994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 
'inode': 1077275, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2126994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1077275, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2126994, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1077285, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2166996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1077285, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2166996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1077285, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2166996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1077291, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2286997, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1077291, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2286997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1077291, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2286997, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1077363, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2617002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1077363, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2617002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 20:01:45.384257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1077363, 'dev': 203, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1747681483.2617002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-05-19 20:01:45.384263 | orchestrator | 2025-05-19 20:01:45.384270 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-19 20:01:45.384277 | orchestrator | Monday 19 May 2025 20:00:34 +0000 (0:00:34.353) 0:00:49.727 ************ 2025-05-19 20:01:45.384287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.384291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.384299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 20:01:45.384305 | orchestrator | 2025-05-19 20:01:45.384312 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-19 20:01:45.384318 | orchestrator | Monday 19 May 2025 20:00:35 +0000 (0:00:01.095) 0:00:50.823 ************ 2025-05-19 20:01:45.384324 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:01:45.384332 | orchestrator | 2025-05-19 20:01:45.384338 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-19 20:01:45.384345 | orchestrator | Monday 19 May 2025 20:00:38 +0000 (0:00:02.838) 0:00:53.661 ************ 2025-05-19 20:01:45.384352 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:01:45.384359 | orchestrator | 2025-05-19 20:01:45.384363 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-19 20:01:45.384367 | orchestrator | Monday 19 May 2025 20:00:40 +0000 (0:00:02.375) 0:00:56.037 ************ 2025-05-19 20:01:45.384374 | orchestrator | 2025-05-19 20:01:45.384378 | orchestrator | 
TASK [grafana : Flush handlers] ************************************************ 2025-05-19 20:01:45.384382 | orchestrator | Monday 19 May 2025 20:00:40 +0000 (0:00:00.062) 0:00:56.099 ************ 2025-05-19 20:01:45.384386 | orchestrator | 2025-05-19 20:01:45.384389 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-19 20:01:45.384393 | orchestrator | Monday 19 May 2025 20:00:41 +0000 (0:00:00.075) 0:00:56.175 ************ 2025-05-19 20:01:45.384397 | orchestrator | 2025-05-19 20:01:45.384401 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-19 20:01:45.384404 | orchestrator | Monday 19 May 2025 20:00:41 +0000 (0:00:00.224) 0:00:56.399 ************ 2025-05-19 20:01:45.384409 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.384415 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.384421 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:01:45.384427 | orchestrator | 2025-05-19 20:01:45.384431 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-19 20:01:45.384435 | orchestrator | Monday 19 May 2025 20:00:43 +0000 (0:00:01.833) 0:00:58.233 ************ 2025-05-19 20:01:45.384438 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.384442 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.384446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-19 20:01:45.384450 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-19 20:01:45.384454 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
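The three FAILED - RETRYING messages above come from the role's readiness wait: after the first grafana container is restarted, the play keeps polling the service until it answers, and only then restarts the remaining containers. Below is a minimal Python sketch of that wait-until-ready pattern as illustration only; the health endpoint URL, retry count, and delay are assumptions and are not taken from the actual role, which does this with Ansible tasks rather than a standalone script.

```python
#!/usr/bin/env python3
"""Minimal sketch of a "wait for grafana to start" readiness check.

Assumptions (illustrative only, not taken from the deployment role):
the health endpoint, retry count, and delay values below.
"""
import time
import urllib.error
import urllib.request

GRAFANA_HEALTH_URL = "https://api-int.testbed.osism.xyz:3000/api/health"  # assumed endpoint
RETRIES = 12          # mirrors the "12 retries left" counter seen in the log
DELAY_SECONDS = 10    # assumed pause between attempts


def wait_for_grafana(url: str = GRAFANA_HEALTH_URL,
                     retries: int = RETRIES,
                     delay: int = DELAY_SECONDS) -> bool:
    """Return True once the health endpoint answers with HTTP 200."""
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Service not reachable yet; report remaining attempts and retry.
            print(f"FAILED - RETRYING: Waiting for grafana to start "
                  f"({attempt - 1} retries left).")
        time.sleep(delay)
    return False


if __name__ == "__main__":
    print("ok" if wait_for_grafana() else "giving up")
```

In the log this pattern shows up as three retry messages followed by "ok: [testbed-node-0]" once the first node's grafana instance responds, after which the handler for the remaining containers runs.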
2025-05-19 20:01:45.384457 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.384461 | orchestrator | 2025-05-19 20:01:45.384465 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-19 20:01:45.384469 | orchestrator | Monday 19 May 2025 20:01:22 +0000 (0:00:39.319) 0:01:37.553 ************ 2025-05-19 20:01:45.384472 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.384476 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:01:45.384480 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:01:45.384483 | orchestrator | 2025-05-19 20:01:45.384487 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-19 20:01:45.384491 | orchestrator | Monday 19 May 2025 20:01:36 +0000 (0:00:14.360) 0:01:51.913 ************ 2025-05-19 20:01:45.384495 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:01:45.384498 | orchestrator | 2025-05-19 20:01:45.384502 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-19 20:01:45.384508 | orchestrator | Monday 19 May 2025 20:01:39 +0000 (0:00:02.340) 0:01:54.254 ************ 2025-05-19 20:01:45.384512 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.384516 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:01:45.384519 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:01:45.384523 | orchestrator | 2025-05-19 20:01:45.384527 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-19 20:01:45.384531 | orchestrator | Monday 19 May 2025 20:01:39 +0000 (0:00:00.470) 0:01:54.725 ************ 2025-05-19 20:01:45.384535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-19 20:01:45.384543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-19 20:01:45.384547 | orchestrator | 2025-05-19 20:01:45.384554 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-19 20:01:45.384558 | orchestrator | Monday 19 May 2025 20:01:42 +0000 (0:00:02.578) 0:01:57.304 ************ 2025-05-19 20:01:45.384561 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:01:45.384565 | orchestrator | 2025-05-19 20:01:45.384569 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 20:01:45.384576 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 20:01:45.384582 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 20:01:45.384586 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 20:01:45.384589 | orchestrator | 2025-05-19 20:01:45.384593 | orchestrator | 2025-05-19 20:01:45.384597 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-19 20:01:45.384601 | orchestrator | Monday 19 May 2025 20:01:42 +0000 (0:00:00.376) 0:01:57.680 ************ 2025-05-19 20:01:45.384604 | orchestrator | =============================================================================== 2025-05-19 20:01:45.384608 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.32s 2025-05-19 20:01:45.384612 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 34.35s 2025-05-19 20:01:45.384615 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 14.36s 2025-05-19 20:01:45.384619 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.84s 2025-05-19 20:01:45.384625 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.58s 2025-05-19 20:01:45.384631 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.38s 2025-05-19 20:01:45.384637 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.34s 2025-05-19 20:01:45.384643 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.83s 2025-05-19 20:01:45.384649 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.66s 2025-05-19 20:01:45.384655 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.65s 2025-05-19 20:01:45.384661 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.52s 2025-05-19 20:01:45.384667 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.48s 2025-05-19 20:01:45.384673 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.47s 2025-05-19 20:01:45.384679 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.10s 2025-05-19 20:01:45.384686 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.92s 2025-05-19 20:01:45.384690 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.89s 2025-05-19 20:01:45.384694 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.80s 2025-05-19 20:01:45.384698 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2025-05-19 20:01:45.384701 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.71s 2025-05-19 20:01:45.384705 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.69s 2025-05-19 20:01:45.384709 | orchestrator | 2025-05-19 20:01:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:45.384713 | orchestrator | 2025-05-19 20:01:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:48.429421 | orchestrator | 2025-05-19 20:01:48 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:48.430263 | orchestrator | 2025-05-19 20:01:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:48.430343 | orchestrator | 2025-05-19 20:01:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:51.490756 | orchestrator | 2025-05-19 20:01:51 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in 
state STARTED 2025-05-19 20:01:51.491141 | orchestrator | 2025-05-19 20:01:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:51.491185 | orchestrator | 2025-05-19 20:01:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:54.539555 | orchestrator | 2025-05-19 20:01:54 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:54.540069 | orchestrator | 2025-05-19 20:01:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:54.540117 | orchestrator | 2025-05-19 20:01:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:01:57.602824 | orchestrator | 2025-05-19 20:01:57 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:01:57.604359 | orchestrator | 2025-05-19 20:01:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:01:57.605210 | orchestrator | 2025-05-19 20:01:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:00.651507 | orchestrator | 2025-05-19 20:02:00 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:00.654138 | orchestrator | 2025-05-19 20:02:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:00.654241 | orchestrator | 2025-05-19 20:02:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:03.703931 | orchestrator | 2025-05-19 20:02:03 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:03.705986 | orchestrator | 2025-05-19 20:02:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:03.706126 | orchestrator | 2025-05-19 20:02:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:06.758310 | orchestrator | 2025-05-19 20:02:06 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:06.759289 | orchestrator | 2025-05-19 20:02:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:06.759335 | orchestrator | 2025-05-19 20:02:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:09.809667 | orchestrator | 2025-05-19 20:02:09 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:09.811114 | orchestrator | 2025-05-19 20:02:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:09.811146 | orchestrator | 2025-05-19 20:02:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:12.856933 | orchestrator | 2025-05-19 20:02:12 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:12.859481 | orchestrator | 2025-05-19 20:02:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:12.859568 | orchestrator | 2025-05-19 20:02:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:15.904730 | orchestrator | 2025-05-19 20:02:15 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:15.905250 | orchestrator | 2025-05-19 20:02:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:15.905284 | orchestrator | 2025-05-19 20:02:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:18.948428 | orchestrator | 2025-05-19 20:02:18 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:18.949050 | orchestrator | 2025-05-19 20:02:18 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:18.949156 | orchestrator | 2025-05-19 20:02:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:21.976322 | orchestrator | 2025-05-19 20:02:21 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:21.976673 | orchestrator | 2025-05-19 20:02:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:21.976722 | orchestrator | 2025-05-19 20:02:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:25.039092 | orchestrator | 2025-05-19 20:02:25 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:25.039218 | orchestrator | 2025-05-19 20:02:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:25.039238 | orchestrator | 2025-05-19 20:02:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:28.072718 | orchestrator | 2025-05-19 20:02:28 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:28.074561 | orchestrator | 2025-05-19 20:02:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:28.074628 | orchestrator | 2025-05-19 20:02:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:31.118683 | orchestrator | 2025-05-19 20:02:31 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:31.119889 | orchestrator | 2025-05-19 20:02:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:31.119997 | orchestrator | 2025-05-19 20:02:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:34.165660 | orchestrator | 2025-05-19 20:02:34 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:34.165791 | orchestrator | 2025-05-19 20:02:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:34.166008 | orchestrator | 2025-05-19 20:02:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:37.219045 | orchestrator | 2025-05-19 20:02:37 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:37.221037 | orchestrator | 2025-05-19 20:02:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:37.221085 | orchestrator | 2025-05-19 20:02:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:40.267903 | orchestrator | 2025-05-19 20:02:40 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:40.270190 | orchestrator | 2025-05-19 20:02:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:40.270282 | orchestrator | 2025-05-19 20:02:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:43.317788 | orchestrator | 2025-05-19 20:02:43 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:43.318258 | orchestrator | 2025-05-19 20:02:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:43.318387 | orchestrator | 2025-05-19 20:02:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:46.360235 | orchestrator | 2025-05-19 20:02:46 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:46.361099 | orchestrator | 2025-05-19 20:02:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:46.361320 | orchestrator 
| 2025-05-19 20:02:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:49.413344 | orchestrator | 2025-05-19 20:02:49 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:49.415029 | orchestrator | 2025-05-19 20:02:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:49.415075 | orchestrator | 2025-05-19 20:02:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:52.468525 | orchestrator | 2025-05-19 20:02:52 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:52.470271 | orchestrator | 2025-05-19 20:02:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:52.470316 | orchestrator | 2025-05-19 20:02:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:55.525955 | orchestrator | 2025-05-19 20:02:55 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:55.527811 | orchestrator | 2025-05-19 20:02:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:55.527880 | orchestrator | 2025-05-19 20:02:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:02:58.580443 | orchestrator | 2025-05-19 20:02:58 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:02:58.582531 | orchestrator | 2025-05-19 20:02:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:02:58.582743 | orchestrator | 2025-05-19 20:02:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:01.626174 | orchestrator | 2025-05-19 20:03:01 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:01.627410 | orchestrator | 2025-05-19 20:03:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:01.627481 | orchestrator | 2025-05-19 20:03:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:04.688552 | orchestrator | 2025-05-19 20:03:04 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:04.689098 | orchestrator | 2025-05-19 20:03:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:04.689145 | orchestrator | 2025-05-19 20:03:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:07.755349 | orchestrator | 2025-05-19 20:03:07 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:07.756745 | orchestrator | 2025-05-19 20:03:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:07.756803 | orchestrator | 2025-05-19 20:03:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:10.814636 | orchestrator | 2025-05-19 20:03:10 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:10.814775 | orchestrator | 2025-05-19 20:03:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:10.814848 | orchestrator | 2025-05-19 20:03:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:13.869993 | orchestrator | 2025-05-19 20:03:13 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:13.873035 | orchestrator | 2025-05-19 20:03:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:13.873096 | orchestrator | 2025-05-19 20:03:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:16.928303 | 
orchestrator | 2025-05-19 20:03:16 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:16.931720 | orchestrator | 2025-05-19 20:03:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:16.931823 | orchestrator | 2025-05-19 20:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:19.974942 | orchestrator | 2025-05-19 20:03:19 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:19.976022 | orchestrator | 2025-05-19 20:03:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:19.977360 | orchestrator | 2025-05-19 20:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:23.031471 | orchestrator | 2025-05-19 20:03:23 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:23.033289 | orchestrator | 2025-05-19 20:03:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:23.033350 | orchestrator | 2025-05-19 20:03:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:26.080883 | orchestrator | 2025-05-19 20:03:26 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:26.082262 | orchestrator | 2025-05-19 20:03:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:26.082293 | orchestrator | 2025-05-19 20:03:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:29.126051 | orchestrator | 2025-05-19 20:03:29 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:29.127577 | orchestrator | 2025-05-19 20:03:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:29.127601 | orchestrator | 2025-05-19 20:03:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:32.190128 | orchestrator | 2025-05-19 20:03:32 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:32.191487 | orchestrator | 2025-05-19 20:03:32 | INFO  | Task 9e9ff5b0-8a4b-4c2d-bd18-70ab50ec86ab is in state STARTED 2025-05-19 20:03:32.192791 | orchestrator | 2025-05-19 20:03:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:32.192819 | orchestrator | 2025-05-19 20:03:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:35.264574 | orchestrator | 2025-05-19 20:03:35 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:35.264774 | orchestrator | 2025-05-19 20:03:35 | INFO  | Task 9e9ff5b0-8a4b-4c2d-bd18-70ab50ec86ab is in state STARTED 2025-05-19 20:03:35.264800 | orchestrator | 2025-05-19 20:03:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:35.264820 | orchestrator | 2025-05-19 20:03:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:38.312222 | orchestrator | 2025-05-19 20:03:38 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:38.312600 | orchestrator | 2025-05-19 20:03:38 | INFO  | Task 9e9ff5b0-8a4b-4c2d-bd18-70ab50ec86ab is in state STARTED 2025-05-19 20:03:38.315019 | orchestrator | 2025-05-19 20:03:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:38.315067 | orchestrator | 2025-05-19 20:03:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:41.369143 | orchestrator | 2025-05-19 20:03:41 | INFO  | Task 
e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:41.370998 | orchestrator | 2025-05-19 20:03:41 | INFO  | Task 9e9ff5b0-8a4b-4c2d-bd18-70ab50ec86ab is in state STARTED 2025-05-19 20:03:41.373320 | orchestrator | 2025-05-19 20:03:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:41.373705 | orchestrator | 2025-05-19 20:03:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:03:44.427563 | orchestrator | 2025-05-19 20:03:44 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:03:44.430066 | orchestrator | 2025-05-19 20:03:44 | INFO  | Task 9e9ff5b0-8a4b-4c2d-bd18-70ab50ec86ab is in state SUCCESS 2025-05-19 20:03:44.431366 | orchestrator | 2025-05-19 20:03:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:03:44.431407 | orchestrator | 2025-05-19 20:03:44 | INFO  | Wait 1 second(s) until the next check
[... the same status check for tasks e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 and 6cbcb477-08de-4f2b-846d-588e50cbe210 repeats roughly every 3 seconds from 20:03:47 through 20:05:55, both tasks remaining in state STARTED ...]
2025-05-19 20:05:58.773003 | orchestrator | 2025-05-19 20:05:58 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state STARTED 2025-05-19 20:05:58.774337 | orchestrator | 2025-05-19 20:05:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
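The status checks above are the OSISM task watcher polling the state of the queued deployment tasks until each one leaves the STARTED state. A rough sketch of that polling pattern in Python (the client object and get_task_state call are hypothetical placeholders for illustration, not the actual osism implementation):

    import time

    def wait_for_tasks(client, task_ids, poll_interval=1.0):
        """Poll task states until no task is left in a non-final state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = client.get_task_state(task_id)  # hypothetical API call
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(poll_interval)} second(s) until the next check")
                time.sleep(poll_interval)

In the log the gap between checks is roughly 3 seconds rather than 1: the 1 second sleep plus the time needed to query each remaining task.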
2025-05-19 20:05:58.774370 | orchestrator | 2025-05-19 20:05:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:01.840882 | orchestrator | 2025-05-19 20:06:01.841028 | orchestrator | None 2025-05-19 20:06:01.841051 | orchestrator | 2025-05-19 20:06:01.841067 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 20:06:01.841186 | orchestrator | 2025-05-19 20:06:01.841199 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-19 20:06:01.841209 | orchestrator | Monday 19 May 2025 19:57:15 +0000 (0:00:00.790) 0:00:00.790 ************ 2025-05-19 20:06:01.841218 | orchestrator | changed: [testbed-manager] 2025-05-19 20:06:01.841229 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841239 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.841247 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.841256 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.841265 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.841274 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.841325 | orchestrator | 2025-05-19 20:06:01.841334 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 20:06:01.841343 | orchestrator | Monday 19 May 2025 19:57:16 +0000 (0:00:01.173) 0:00:01.964 ************ 2025-05-19 20:06:01.841356 | orchestrator | changed: [testbed-manager] 2025-05-19 20:06:01.841370 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841413 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.841428 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.841496 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.841508 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.841518 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.841528 | orchestrator | 2025-05-19 20:06:01.841539 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 20:06:01.841549 | orchestrator | Monday 19 May 2025 19:57:17 +0000 (0:00:01.388) 0:00:03.353 ************ 2025-05-19 20:06:01.841560 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-19 20:06:01.841570 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-19 20:06:01.841580 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-19 20:06:01.841591 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-19 20:06:01.841601 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-19 20:06:01.841611 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-19 20:06:01.841621 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-19 20:06:01.841630 | orchestrator | 2025-05-19 20:06:01.841640 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-19 20:06:01.841650 | orchestrator | 2025-05-19 20:06:01.841660 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-19 20:06:01.841670 | orchestrator | Monday 19 May 2025 19:57:19 +0000 (0:00:01.482) 0:00:04.835 ************ 2025-05-19 20:06:01.841694 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.841704 | orchestrator | 2025-05-19 20:06:01.841714 | orchestrator | TASK 
[nova : Creating Nova databases] ****************************************** 2025-05-19 20:06:01.841724 | orchestrator | Monday 19 May 2025 19:57:19 +0000 (0:00:00.830) 0:00:05.665 ************ 2025-05-19 20:06:01.841736 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-19 20:06:01.841747 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-19 20:06:01.841758 | orchestrator | 2025-05-19 20:06:01.841767 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-19 20:06:01.841777 | orchestrator | Monday 19 May 2025 19:57:24 +0000 (0:00:04.815) 0:00:10.481 ************ 2025-05-19 20:06:01.841787 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 20:06:01.841797 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 20:06:01.841808 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841818 | orchestrator | 2025-05-19 20:06:01.841827 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-19 20:06:01.841836 | orchestrator | Monday 19 May 2025 19:57:29 +0000 (0:00:04.939) 0:00:15.421 ************ 2025-05-19 20:06:01.841844 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841852 | orchestrator | 2025-05-19 20:06:01.841861 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-19 20:06:01.841869 | orchestrator | Monday 19 May 2025 19:57:30 +0000 (0:00:00.750) 0:00:16.171 ************ 2025-05-19 20:06:01.841878 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841897 | orchestrator | 2025-05-19 20:06:01.841906 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-19 20:06:01.841915 | orchestrator | Monday 19 May 2025 19:57:32 +0000 (0:00:01.664) 0:00:17.836 ************ 2025-05-19 20:06:01.841923 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.841932 | orchestrator | 2025-05-19 20:06:01.841940 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 20:06:01.841949 | orchestrator | Monday 19 May 2025 19:57:37 +0000 (0:00:05.155) 0:00:22.991 ************ 2025-05-19 20:06:01.841958 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.841966 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.841975 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.841983 | orchestrator | 2025-05-19 20:06:01.842000 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 20:06:01.842009 | orchestrator | Monday 19 May 2025 19:57:38 +0000 (0:00:00.780) 0:00:23.771 ************ 2025-05-19 20:06:01.842065 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.842075 | orchestrator | 2025-05-19 20:06:01.842084 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-19 20:06:01.842093 | orchestrator | Monday 19 May 2025 19:58:08 +0000 (0:00:30.906) 0:00:54.678 ************ 2025-05-19 20:06:01.842101 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.842110 | orchestrator | 2025-05-19 20:06:01.842194 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 20:06:01.842207 | orchestrator | Monday 19 May 2025 19:58:23 +0000 (0:00:14.161) 0:01:08.840 ************ 2025-05-19 20:06:01.842215 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.842224 | 
orchestrator | 2025-05-19 20:06:01.842233 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 20:06:01.842242 | orchestrator | Monday 19 May 2025 19:58:34 +0000 (0:00:10.849) 0:01:19.689 ************ 2025-05-19 20:06:01.842270 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.842280 | orchestrator | 2025-05-19 20:06:01.842289 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-19 20:06:01.842298 | orchestrator | Monday 19 May 2025 19:58:35 +0000 (0:00:01.600) 0:01:21.290 ************ 2025-05-19 20:06:01.842307 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.842316 | orchestrator | 2025-05-19 20:06:01.842325 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 20:06:01.842334 | orchestrator | Monday 19 May 2025 19:58:36 +0000 (0:00:00.801) 0:01:22.092 ************ 2025-05-19 20:06:01.842344 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.842354 | orchestrator | 2025-05-19 20:06:01.842363 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 20:06:01.842372 | orchestrator | Monday 19 May 2025 19:58:37 +0000 (0:00:00.936) 0:01:23.029 ************ 2025-05-19 20:06:01.842381 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.842390 | orchestrator | 2025-05-19 20:06:01.842407 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 20:06:01.842421 | orchestrator | Monday 19 May 2025 19:58:54 +0000 (0:00:17.236) 0:01:40.265 ************ 2025-05-19 20:06:01.842434 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.842469 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842484 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842498 | orchestrator | 2025-05-19 20:06:01.842513 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-19 20:06:01.842528 | orchestrator | 2025-05-19 20:06:01.842542 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-19 20:06:01.842558 | orchestrator | Monday 19 May 2025 19:58:55 +0000 (0:00:00.568) 0:01:40.834 ************ 2025-05-19 20:06:01.842574 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.842590 | orchestrator | 2025-05-19 20:06:01.842604 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-19 20:06:01.842618 | orchestrator | Monday 19 May 2025 19:58:56 +0000 (0:00:00.949) 0:01:41.783 ************ 2025-05-19 20:06:01.842632 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842641 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842650 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.842659 | orchestrator | 2025-05-19 20:06:01.842668 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-19 20:06:01.842676 | orchestrator | Monday 19 May 2025 19:58:58 +0000 (0:00:02.559) 0:01:44.342 ************ 2025-05-19 20:06:01.842685 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842693 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842710 | orchestrator | changed: [testbed-node-0] 2025-05-19 
20:06:01.842728 | orchestrator | 2025-05-19 20:06:01.842737 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 20:06:01.842746 | orchestrator | Monday 19 May 2025 19:59:01 +0000 (0:00:02.434) 0:01:46.777 ************ 2025-05-19 20:06:01.842775 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.842784 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842793 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842801 | orchestrator | 2025-05-19 20:06:01.842810 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 20:06:01.842819 | orchestrator | Monday 19 May 2025 19:59:01 +0000 (0:00:00.505) 0:01:47.283 ************ 2025-05-19 20:06:01.842828 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 20:06:01.842837 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842846 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 20:06:01.842854 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842863 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 20:06:01.842872 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-19 20:06:01.842880 | orchestrator | 2025-05-19 20:06:01.842889 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 20:06:01.842898 | orchestrator | Monday 19 May 2025 19:59:10 +0000 (0:00:08.703) 0:01:55.986 ************ 2025-05-19 20:06:01.842906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.842915 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842923 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.842933 | orchestrator | 2025-05-19 20:06:01.842941 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 20:06:01.842950 | orchestrator | Monday 19 May 2025 19:59:10 +0000 (0:00:00.663) 0:01:56.649 ************ 2025-05-19 20:06:01.842959 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 20:06:01.842967 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.842976 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 20:06:01.842984 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.842993 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 20:06:01.843002 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843010 | orchestrator | 2025-05-19 20:06:01.843019 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-19 20:06:01.843028 | orchestrator | Monday 19 May 2025 19:59:12 +0000 (0:00:01.338) 0:01:57.987 ************ 2025-05-19 20:06:01.843036 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843045 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843063 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.843072 | orchestrator | 2025-05-19 20:06:01.843081 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-19 20:06:01.843090 | orchestrator | Monday 19 May 2025 19:59:12 +0000 (0:00:00.547) 0:01:58.535 ************ 2025-05-19 20:06:01.843098 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843107 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843115 | orchestrator | changed: [testbed-node-0] 2025-05-19 
20:06:01.843124 | orchestrator | 2025-05-19 20:06:01.843133 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-19 20:06:01.843141 | orchestrator | Monday 19 May 2025 19:59:13 +0000 (0:00:01.125) 0:01:59.661 ************ 2025-05-19 20:06:01.843150 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843167 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843176 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.843185 | orchestrator | 2025-05-19 20:06:01.843193 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-05-19 20:06:01.843202 | orchestrator | Monday 19 May 2025 19:59:16 +0000 (0:00:02.450) 0:02:02.112 ************ 2025-05-19 20:06:01.843211 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843219 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843228 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.843244 | orchestrator | 2025-05-19 20:06:01.843253 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 20:06:01.843262 | orchestrator | Monday 19 May 2025 19:59:37 +0000 (0:00:20.629) 0:02:22.741 ************ 2025-05-19 20:06:01.843270 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843323 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843356 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.843365 | orchestrator | 2025-05-19 20:06:01.843374 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 20:06:01.843383 | orchestrator | Monday 19 May 2025 19:59:48 +0000 (0:00:11.028) 0:02:33.769 ************ 2025-05-19 20:06:01.843423 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.843432 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843505 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843520 | orchestrator | 2025-05-19 20:06:01.843533 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-19 20:06:01.843661 | orchestrator | Monday 19 May 2025 19:59:49 +0000 (0:00:01.260) 0:02:35.030 ************ 2025-05-19 20:06:01.843676 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843690 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843703 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.843717 | orchestrator | 2025-05-19 20:06:01.843731 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-05-19 20:06:01.843744 | orchestrator | Monday 19 May 2025 20:00:00 +0000 (0:00:11.430) 0:02:46.461 ************ 2025-05-19 20:06:01.843758 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.843774 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843786 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843799 | orchestrator | 2025-05-19 20:06:01.843812 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 20:06:01.843928 | orchestrator | Monday 19 May 2025 20:00:02 +0000 (0:00:01.382) 0:02:47.844 ************ 2025-05-19 20:06:01.843941 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.843950 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.843959 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.843967 | orchestrator | 2025-05-19 20:06:01.843976 | orchestrator | PLAY 
[Apply role nova] ********************************************************* 2025-05-19 20:06:01.843985 | orchestrator | 2025-05-19 20:06:01.844002 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 20:06:01.844011 | orchestrator | Monday 19 May 2025 20:00:02 +0000 (0:00:00.492) 0:02:48.337 ************ 2025-05-19 20:06:01.844020 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.844030 | orchestrator | 2025-05-19 20:06:01.844039 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-19 20:06:01.844047 | orchestrator | Monday 19 May 2025 20:00:03 +0000 (0:00:00.665) 0:02:49.002 ************ 2025-05-19 20:06:01.844056 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-19 20:06:01.844078 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-19 20:06:01.844086 | orchestrator | 2025-05-19 20:06:01.844094 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-19 20:06:01.844102 | orchestrator | Monday 19 May 2025 20:00:06 +0000 (0:00:03.561) 0:02:52.564 ************ 2025-05-19 20:06:01.844110 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-19 20:06:01.844121 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-19 20:06:01.844129 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-19 20:06:01.844137 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-19 20:06:01.844155 | orchestrator | 2025-05-19 20:06:01.844163 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-19 20:06:01.844171 | orchestrator | Monday 19 May 2025 20:00:13 +0000 (0:00:06.891) 0:02:59.456 ************ 2025-05-19 20:06:01.844179 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 20:06:01.844187 | orchestrator | 2025-05-19 20:06:01.844195 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-19 20:06:01.844203 | orchestrator | Monday 19 May 2025 20:00:17 +0000 (0:00:03.398) 0:03:02.854 ************ 2025-05-19 20:06:01.844211 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 20:06:01.844219 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-19 20:06:01.844227 | orchestrator | 2025-05-19 20:06:01.844234 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-19 20:06:01.844242 | orchestrator | Monday 19 May 2025 20:00:21 +0000 (0:00:04.149) 0:03:07.004 ************ 2025-05-19 20:06:01.844250 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 20:06:01.844258 | orchestrator | 2025-05-19 20:06:01.844266 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-19 20:06:01.844274 | orchestrator | Monday 19 May 2025 20:00:24 +0000 (0:00:03.593) 0:03:10.597 ************ 2025-05-19 20:06:01.844332 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-19 20:06:01.844340 | orchestrator | changed: 
[testbed-node-0] => (item=nova -> service -> service) 2025-05-19 20:06:01.844348 | orchestrator | 2025-05-19 20:06:01.844356 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-19 20:06:01.844375 | orchestrator | Monday 19 May 2025 20:00:33 +0000 (0:00:08.373) 0:03:18.970 ************ 2025-05-19 20:06:01.844389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844559 | orchestrator | 2025-05-19 20:06:01.844567 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-19 20:06:01.844575 | orchestrator | Monday 19 May 2025 20:00:34 +0000 (0:00:01.461) 0:03:20.432 ************ 2025-05-19 20:06:01.844583 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.844591 | orchestrator | 2025-05-19 20:06:01.844622 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-19 20:06:01.844630 | orchestrator | Monday 19 May 2025 20:00:35 +0000 (0:00:00.320) 0:03:20.753 ************ 2025-05-19 20:06:01.844636 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.844654 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.844661 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.844667 | orchestrator | 2025-05-19 20:06:01.844687 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-19 20:06:01.844694 | orchestrator | Monday 19 May 2025 20:00:35 +0000 (0:00:00.300) 0:03:21.054 ************ 2025-05-19 20:06:01.844701 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 20:06:01.844707 | orchestrator | 2025-05-19 20:06:01.844719 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-19 20:06:01.844726 | orchestrator | Monday 19 May 2025 20:00:35 +0000 (0:00:00.601) 0:03:21.655 ************ 2025-05-19 20:06:01.844732 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.844739 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.844746 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.844752 | orchestrator | 2025-05-19 20:06:01.844759 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 20:06:01.844766 | orchestrator | Monday 19 May 2025 20:00:36 +0000 (0:00:00.335) 0:03:21.990 ************ 2025-05-19 20:06:01.844773 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.844779 | orchestrator | 2025-05-19 20:06:01.844786 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-19 20:06:01.844793 | orchestrator | Monday 19 May 2025 20:00:37 +0000 (0:00:00.875) 0:03:22.866 ************ 2025-05-19 20:06:01.844804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.844839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.844874 | orchestrator | 2025-05-19 20:06:01.844881 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-19 20:06:01.844888 | orchestrator | Monday 19 May 2025 20:00:39 +0000 (0:00:02.645) 0:03:25.511 ************ 2025-05-19 20:06:01.844895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}})  2025-05-19 20:06:01.844903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844915 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.844922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.844940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844947 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.844954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.844962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.844968 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.844975 | orchestrator | 2025-05-19 20:06:01.844982 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-19 20:06:01.844989 | orchestrator | Monday 19 May 2025 20:00:40 +0000 (0:00:00.594) 0:03:26.106 ************ 2025-05-19 20:06:01.845001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845009 | orchestrator | 2025-05-19 20:06:01 | INFO  | Task e7f1f7b2-7b43-4cb7-a8f7-cbd7bbeff0a7 is in state SUCCESS 2025-05-19 20:06:01.845022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845028 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.845039 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845054 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.845068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845088 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.845095 | orchestrator | 2025-05-19 20:06:01.845102 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-19 20:06:01.845109 | orchestrator | Monday 19 May 2025 20:00:41 +0000 (0:00:01.275) 0:03:27.381 ************ 2025-05-19 20:06:01.845119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
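For readability, the nova-api entry that the surrounding "Copying over config.json files for services" and "Copying over nova.conf" tasks iterate over can be laid out as YAML. This is a minimal sketch: every value shown is copied from the item dictionaries printed in this log (image, volumes, healthcheck command, HAProxy frontends for testbed-node-0); only the YAML layout is an illustrative reconstruction rather than the contents of any file generated by this run, and the nova_metadata/nova_metadata_external frontends seen in the log repeat the same pattern on port 8775.

  nova-api:
    container_name: nova_api
    group: nova-api
    image: registry.osism.tech/kolla/release/nova-api:29.2.1.20241206
    enabled: true
    privileged: true
    volumes:                      # bind mounts passed to the nova_api container
      - /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /lib/modules:/lib/modules:ro
      - kolla_logs:/var/log/kolla/
    healthcheck:                  # container healthcheck exactly as logged
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774"]
      timeout: "30"
    haproxy:
      nova_api:                   # internal VIP frontend on 8774
        enabled: true
        mode: http
        external: false
        port: "8774"
        listen_port: "8774"
        tls_backend: "no"
      nova_api_external:          # external frontend behind api.testbed.osism.xyz
        enabled: true
        mode: http
        external: true
        external_fqdn: api.testbed.osism.xyz
        port: "8774"
        listen_port: "8774"
        tls_backend: "no"
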
 2025-05-19 20:06:01.845193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845212 | orchestrator | 2025-05-19 20:06:01.845219 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-19 20:06:01.845226 | orchestrator | Monday 19 May 2025 20:00:44 +0000 (0:00:02.752) 0:03:30.134 ************ 2025-05-19 20:06:01.845237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.845316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845327 | orchestrator | 2025-05-19 20:06:01.845334 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-19 20:06:01.845341 | orchestrator | Monday 19 May 2025 20:00:50 +0000 (0:00:06.476) 0:03:36.610 ************ 2025-05-19 20:06:01.845348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845375 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.845387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845413 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.845424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 20:06:01.845432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.845466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.845473 | orchestrator | 2025-05-19 20:06:01.845480 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-19 20:06:01.845486 | orchestrator | Monday 19 May 2025 20:00:51 +0000 (0:00:00.813) 0:03:37.424 ************ 2025-05-19 20:06:01.845493 | 
orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.845500 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.845507 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.845513 | orchestrator | 2025-05-19 20:06:01.845524 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-19 20:06:01.845531 | orchestrator | Monday 19 May 2025 20:00:53 +0000 (0:00:01.769) 0:03:39.193 ************ 2025-05-19 20:06:01.845538 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.845545 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.845551 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.845558 | orchestrator | 2025-05-19 20:06:01.845564 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-19 20:06:01.845571 | orchestrator | Monday 19 May 2025 20:00:54 +0000 (0:00:00.513) 0:03:39.707 ************ 2025-05-19 20:06:01.845578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.845601 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 20:06:01.846249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.846272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.846280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.846293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.846301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.846315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.846322 | orchestrator | 2025-05-19 20:06:01.846329 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 20:06:01.846337 | orchestrator | Monday 19 May 2025 20:00:56 +0000 (0:00:02.019) 0:03:41.727 ************ 2025-05-19 20:06:01.846343 | orchestrator | 2025-05-19 20:06:01.846350 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 20:06:01.846357 | orchestrator | Monday 19 May 2025 20:00:56 +0000 (0:00:00.301) 0:03:42.028 ************ 2025-05-19 20:06:01.846364 | orchestrator | 2025-05-19 20:06:01.846370 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-19 20:06:01.846377 | orchestrator | Monday 19 May 2025 20:00:56 +0000 (0:00:00.114) 0:03:42.143 ************ 2025-05-19 20:06:01.846384 | orchestrator | 2025-05-19 20:06:01.846395 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-19 20:06:01.846402 | orchestrator | Monday 19 May 2025 20:00:56 +0000 (0:00:00.297) 0:03:42.441 ************ 2025-05-19 20:06:01.846430 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.846456 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.846463 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.846470 | orchestrator | 2025-05-19 20:06:01.846476 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-19 20:06:01.846483 | orchestrator | Monday 19 May 2025 20:01:13 +0000 (0:00:16.386) 0:03:58.827 ************ 2025-05-19 20:06:01.846490 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.846496 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.846520 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.846526 | orchestrator | 2025-05-19 20:06:01.846533 | orchestrator | PLAY [Apply role nova-cell] 
**************************************************** 2025-05-19 20:06:01.846540 | orchestrator | 2025-05-19 20:06:01.846547 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 20:06:01.846553 | orchestrator | Monday 19 May 2025 20:01:24 +0000 (0:00:11.150) 0:04:09.978 ************ 2025-05-19 20:06:01.846560 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.846567 | orchestrator | 2025-05-19 20:06:01.846574 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 20:06:01.846581 | orchestrator | Monday 19 May 2025 20:01:26 +0000 (0:00:01.714) 0:04:11.693 ************ 2025-05-19 20:06:01.846587 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.846594 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.846601 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.846648 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.846655 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.846661 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.846668 | orchestrator | 2025-05-19 20:06:01.846675 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-19 20:06:01.846681 | orchestrator | Monday 19 May 2025 20:01:26 +0000 (0:00:00.805) 0:04:12.498 ************ 2025-05-19 20:06:01.846688 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.846701 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.846708 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.846714 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 20:06:01.846721 | orchestrator | 2025-05-19 20:06:01.846728 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-19 20:06:01.846739 | orchestrator | Monday 19 May 2025 20:01:27 +0000 (0:00:01.078) 0:04:13.576 ************ 2025-05-19 20:06:01.846746 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-19 20:06:01.846753 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-19 20:06:01.846760 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-19 20:06:01.846766 | orchestrator | 2025-05-19 20:06:01.846773 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-19 20:06:01.846780 | orchestrator | Monday 19 May 2025 20:01:28 +0000 (0:00:00.939) 0:04:14.516 ************ 2025-05-19 20:06:01.846786 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-19 20:06:01.846793 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-19 20:06:01.846800 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-19 20:06:01.846807 | orchestrator | 2025-05-19 20:06:01.846813 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-19 20:06:01.846820 | orchestrator | Monday 19 May 2025 20:01:30 +0000 (0:00:01.400) 0:04:15.916 ************ 2025-05-19 20:06:01.846827 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-19 20:06:01.846833 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.846840 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-19 20:06:01.846847 | 
orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.846853 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-19 20:06:01.846860 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.846867 | orchestrator | 2025-05-19 20:06:01.846873 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-19 20:06:01.846880 | orchestrator | Monday 19 May 2025 20:01:30 +0000 (0:00:00.673) 0:04:16.589 ************ 2025-05-19 20:06:01.846887 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 20:06:01.846893 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 20:06:01.846900 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.846907 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 20:06:01.846913 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 20:06:01.846920 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 20:06:01.846926 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 20:06:01.846933 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-19 20:06:01.846940 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.846946 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-19 20:06:01.846953 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-19 20:06:01.846959 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.846966 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 20:06:01.847015 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 20:06:01.847022 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-19 20:06:01.847029 | orchestrator | 2025-05-19 20:06:01.847040 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-19 20:06:01.847047 | orchestrator | Monday 19 May 2025 20:01:32 +0000 (0:00:01.396) 0:04:17.986 ************ 2025-05-19 20:06:01.847059 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.847065 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.847072 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.847079 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.847085 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.847092 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.847098 | orchestrator | 2025-05-19 20:06:01.847105 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-19 20:06:01.847112 | orchestrator | Monday 19 May 2025 20:01:33 +0000 (0:00:01.242) 0:04:19.228 ************ 2025-05-19 20:06:01.847119 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.847125 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.847132 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.847138 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.847145 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.847151 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.847158 | 
orchestrator | 2025-05-19 20:06:01.847165 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-19 20:06:01.847171 | orchestrator | Monday 19 May 2025 20:01:35 +0000 (0:00:01.972) 0:04:21.201 ************ 2025-05-19 20:06:01.847182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 
'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': 
False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.847623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 
'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847683 | orchestrator | 2025-05-19 20:06:01.847690 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 20:06:01.847697 | orchestrator | Monday 19 May 2025 20:01:38 +0000 (0:00:02.619) 0:04:23.820 ************ 2025-05-19 20:06:01.847704 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 20:06:01.847713 | orchestrator | 2025-05-19 20:06:01.847720 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-19 20:06:01.847726 | orchestrator | Monday 19 May 2025 20:01:39 +0000 (0:00:01.536) 0:04:25.357 ************ 2025-05-19 20:06:01.847737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847744 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847844 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.847871 | orchestrator | 2025-05-19 20:06:01.847877 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-19 20:06:01.847884 | orchestrator | Monday 19 May 2025 20:01:44 +0000 (0:00:04.348) 0:04:29.705 ************ 2025-05-19 20:06:01.847890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847914 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.847920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847947 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.847954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.847965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.847971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.847978 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.847987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.847998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848005 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.848011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.848018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848024 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.848034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.848041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848047 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.848053 | orchestrator | 2025-05-19 20:06:01.848060 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-19 20:06:01.848066 | orchestrator | Monday 19 May 2025 20:01:46 +0000 (0:00:02.039) 0:04:31.744 ************ 2025-05-19 20:06:01.848079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.848086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.848093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848099 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.848383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version 
--daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.848402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.848413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848432 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.848471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.848480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.848486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848492 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.848505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.848512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848524 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.848530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.848541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848547 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.848554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.848560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.848567 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.848573 | orchestrator | 2025-05-19 20:06:01.848579 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 20:06:01.848586 | orchestrator | Monday 19 May 2025 20:01:48 +0000 (0:00:02.560) 0:04:34.305 ************ 2025-05-19 20:06:01.848592 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.848599 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.848605 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.848611 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 20:06:01.848618 | orchestrator | 2025-05-19 20:06:01.848624 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-19 20:06:01.848630 | orchestrator | Monday 19 May 2025 20:01:49 +0000 (0:00:01.210) 0:04:35.515 ************ 2025-05-19 20:06:01.848644 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 20:06:01.848650 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 20:06:01.848656 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 20:06:01.848663 | orchestrator | 2025-05-19 20:06:01.848669 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-19 20:06:01.848675 | orchestrator | Monday 19 May 2025 20:01:50 +0000 (0:00:01.091) 0:04:36.607 ************ 2025-05-19 20:06:01.848681 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 20:06:01.848687 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 20:06:01.848693 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 20:06:01.848699 | orchestrator | 2025-05-19 20:06:01.848705 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-19 20:06:01.848711 | orchestrator | Monday 19 May 2025 20:01:51 +0000 (0:00:00.791) 0:04:37.398 ************ 2025-05-19 20:06:01.848718 | orchestrator | ok: [testbed-node-3] 2025-05-19 20:06:01.848724 | orchestrator | ok: [testbed-node-4] 2025-05-19 20:06:01.848730 | orchestrator | ok: [testbed-node-5] 2025-05-19 20:06:01.848736 | orchestrator | 2025-05-19 20:06:01.848742 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-19 20:06:01.848748 | orchestrator | Monday 19 May 2025 20:01:52 +0000 (0:00:00.696) 0:04:38.095 ************ 2025-05-19 20:06:01.848755 | orchestrator | ok: [testbed-node-3] 2025-05-19 20:06:01.848761 | 
orchestrator | ok: [testbed-node-4] 2025-05-19 20:06:01.848767 | orchestrator | ok: [testbed-node-5] 2025-05-19 20:06:01.848773 | orchestrator | 2025-05-19 20:06:01.848779 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-19 20:06:01.848806 | orchestrator | Monday 19 May 2025 20:01:52 +0000 (0:00:00.530) 0:04:38.625 ************ 2025-05-19 20:06:01.848812 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 20:06:01.848818 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 20:06:01.848824 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 20:06:01.848830 | orchestrator | 2025-05-19 20:06:01.848837 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-19 20:06:01.848843 | orchestrator | Monday 19 May 2025 20:01:54 +0000 (0:00:01.419) 0:04:40.044 ************ 2025-05-19 20:06:01.848849 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 20:06:01.848898 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 20:06:01.848905 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 20:06:01.848911 | orchestrator | 2025-05-19 20:06:01.848917 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-19 20:06:01.848927 | orchestrator | Monday 19 May 2025 20:01:55 +0000 (0:00:01.420) 0:04:41.465 ************ 2025-05-19 20:06:01.848933 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-19 20:06:01.848939 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-19 20:06:01.848957 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-19 20:06:01.848963 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-19 20:06:01.848969 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-19 20:06:01.848975 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-19 20:06:01.848981 | orchestrator | 2025-05-19 20:06:01.848988 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-19 20:06:01.848994 | orchestrator | Monday 19 May 2025 20:02:01 +0000 (0:00:05.609) 0:04:47.075 ************ 2025-05-19 20:06:01.849000 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.849006 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.849012 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.849018 | orchestrator | 2025-05-19 20:06:01.849026 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-19 20:06:01.849034 | orchestrator | Monday 19 May 2025 20:02:01 +0000 (0:00:00.491) 0:04:47.566 ************ 2025-05-19 20:06:01.849041 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.849058 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.849065 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.849072 | orchestrator | 2025-05-19 20:06:01.849080 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-19 20:06:01.849087 | orchestrator | Monday 19 May 2025 20:02:02 +0000 (0:00:00.498) 0:04:48.065 ************ 2025-05-19 20:06:01.849094 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.849101 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.849108 | orchestrator | changed: 
[testbed-node-5] 2025-05-19 20:06:01.849115 | orchestrator | 2025-05-19 20:06:01.849122 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-19 20:06:01.849129 | orchestrator | Monday 19 May 2025 20:02:03 +0000 (0:00:01.405) 0:04:49.471 ************ 2025-05-19 20:06:01.849137 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 20:06:01.849145 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 20:06:01.849152 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-19 20:06:01.849159 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 20:06:01.849166 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 20:06:01.849174 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-19 20:06:01.849181 | orchestrator | 2025-05-19 20:06:01.849188 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-19 20:06:01.849200 | orchestrator | Monday 19 May 2025 20:02:07 +0000 (0:00:03.566) 0:04:53.037 ************ 2025-05-19 20:06:01.849208 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 20:06:01.849215 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 20:06:01.849222 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 20:06:01.849230 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 20:06:01.849237 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.849243 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 20:06:01.849249 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.849255 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 20:06:01.849262 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.849268 | orchestrator | 2025-05-19 20:06:01.849274 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-05-19 20:06:01.849280 | orchestrator | Monday 19 May 2025 20:02:10 +0000 (0:00:03.549) 0:04:56.587 ************ 2025-05-19 20:06:01.849286 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.849292 | orchestrator | 2025-05-19 20:06:01.849298 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-19 20:06:01.849304 | orchestrator | Monday 19 May 2025 20:02:11 +0000 (0:00:00.132) 0:04:56.719 ************ 2025-05-19 20:06:01.849310 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.849316 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.849322 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.849328 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.849334 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.849341 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.849347 | orchestrator | 2025-05-19 20:06:01.849353 | orchestrator | TASK [nova-cell : Check 
for vendordata file] *********************************** 2025-05-19 20:06:01.849359 | orchestrator | Monday 19 May 2025 20:02:12 +0000 (0:00:00.995) 0:04:57.715 ************ 2025-05-19 20:06:01.849365 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 20:06:01.849376 | orchestrator | 2025-05-19 20:06:01.849382 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-19 20:06:01.849388 | orchestrator | Monday 19 May 2025 20:02:12 +0000 (0:00:00.384) 0:04:58.099 ************ 2025-05-19 20:06:01.849395 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.849401 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.849407 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.849413 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.849419 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.849425 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.849431 | orchestrator | 2025-05-19 20:06:01.849489 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-19 20:06:01.849497 | orchestrator | Monday 19 May 2025 20:02:13 +0000 (0:00:00.798) 0:04:58.898 ************ 2025-05-19 20:06:01.849506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.849534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.849545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.849563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.849578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}})  2025-05-19 20:06:01.849601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.849618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849677 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.849690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.849754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849766 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.849789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849837 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-05-19 20:06:01.849847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.849874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.849906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.849916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.849975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849981 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.849992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  
2025-05-19 20:06:01.850061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850067 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850089 | orchestrator | 2025-05-19 20:06:01.850094 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-19 20:06:01.850101 | orchestrator | Monday 19 May 2025 20:02:17 +0000 (0:00:04.431) 0:05:03.329 ************ 2025-05-19 20:06:01.850107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.850331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.850408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850500 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.850561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.850572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.850626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.850642 | orchestrator | 2025-05-19 20:06:01.850647 | orchestrator | 
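The loop output above records one entry per (host, service) pair: a service is acted on ("changed") only when it is enabled and the host belongs to its group, and is reported as "skipping" otherwise, which is why nova-compute changes on testbed-node-3/4/5 while nova-conductor and nova-novncproxy change on testbed-node-0/1/2 and the disabled spice/serial proxies are skipped everywhere. The following is a minimal Python sketch of that selection pattern, using a simplified services dict modelled on the items printed above; the helper names (host_groups, select_services) and the inventory mapping are illustrative assumptions, not kolla-ansible code.

    # Sketch of the per-host filtering implied by the loop output above.
    # The dict layout mirrors the items printed in the log; only the fields
    # relevant to the changed/skipping decision are kept.
    services = {
        "nova-compute": {"group": "compute", "enabled": True},
        "nova-conductor": {"group": "nova-conductor", "enabled": True},
        "nova-novncproxy": {"group": "nova-novncproxy", "enabled": True},
        "nova-spicehtml5proxy": {"group": "nova-spicehtml5proxy", "enabled": False},
        "nova-serialproxy": {"group": "nova-serialproxy", "enabled": False},
    }

    # Hypothetical inventory mapping for this testbed: controllers carry the
    # conductor/proxy groups, compute nodes carry the "compute" group.
    host_groups = {
        "testbed-node-0": {"nova-conductor", "nova-novncproxy"},
        "testbed-node-3": {"compute"},
    }

    def select_services(host):
        """Return the services a host would act on ("changed" in the log);
        every other item shows up as "skipping" for that host."""
        return [
            name
            for name, svc in services.items()
            if svc["enabled"] and svc["group"] in host_groups[host]
        ]

    print(select_services("testbed-node-0"))  # ['nova-conductor', 'nova-novncproxy']
    print(select_services("testbed-node-3"))  # ['nova-compute']

Under these assumptions the sketch reproduces the changed/skipping split seen above; the real role additionally templates the full container definition (image, volumes, healthcheck) for each selected service.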
TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-05-19 20:06:01.850653 | orchestrator | Monday 19 May 2025 20:02:24 +0000 (0:00:07.228) 0:05:10.557 ************ 2025-05-19 20:06:01.850659 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.850665 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.850671 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.850676 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.850681 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.850687 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.850692 | orchestrator | 2025-05-19 20:06:01.850698 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-19 20:06:01.850703 | orchestrator | Monday 19 May 2025 20:02:26 +0000 (0:00:01.565) 0:05:12.122 ************ 2025-05-19 20:06:01.850709 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 20:06:01.850715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 20:06:01.850720 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 20:06:01.850725 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 20:06:01.850731 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.850741 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 20:06:01.850746 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.850752 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 20:06:01.850758 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 20:06:01.850764 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.850769 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 20:06:01.850775 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 20:06:01.850780 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 20:06:01.850786 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 20:06:01.850791 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 20:06:01.850797 | orchestrator | 2025-05-19 20:06:01.850802 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-19 20:06:01.850808 | orchestrator | Monday 19 May 2025 20:02:31 +0000 (0:00:05.151) 0:05:17.274 ************ 2025-05-19 20:06:01.850814 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.850819 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.850825 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.850830 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.850841 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.850846 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.850852 | orchestrator | 2025-05-19 20:06:01.850857 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] 
********************* 2025-05-19 20:06:01.850863 | orchestrator | Monday 19 May 2025 20:02:32 +0000 (0:00:00.947) 0:05:18.221 ************ 2025-05-19 20:06:01.850869 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 20:06:01.850875 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850884 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 20:06:01.850889 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 20:06:01.850895 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850900 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.850911 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850917 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 20:06:01.850922 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 20:06:01.850928 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850933 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.850939 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 20:06:01.850944 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 20:06:01.850949 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.850955 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850960 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850966 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850971 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850976 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850982 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 20:06:01.850987 | orchestrator | 2025-05-19 20:06:01.850993 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-19 20:06:01.850998 | orchestrator | Monday 19 May 2025 20:02:40 +0000 (0:00:07.845) 0:05:26.067 ************ 2025-05-19 20:06:01.851004 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 20:06:01.851009 | 
orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 20:06:01.851029 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 20:06:01.851035 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 20:06:01.851040 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 20:06:01.851050 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 20:06:01.851055 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 20:06:01.851060 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 20:06:01.851066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 20:06:01.851071 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 20:06:01.851077 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 20:06:01.851082 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 20:06:01.851087 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 20:06:01.851093 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851098 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 20:06:01.851104 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851109 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 20:06:01.851114 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851120 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 20:06:01.851125 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 20:06:01.851133 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 20:06:01.851142 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 20:06:01.851151 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 20:06:01.851165 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 20:06:01.851174 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 20:06:01.851183 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 20:06:01.851192 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 20:06:01.851202 | orchestrator | 2025-05-19 20:06:01.851210 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-19 20:06:01.851220 | orchestrator | Monday 19 May 2025 20:02:50 +0000 (0:00:10.057) 0:05:36.125 ************ 2025-05-19 20:06:01.851229 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.851238 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.851247 | orchestrator | skipping: [testbed-node-5] 
2025-05-19 20:06:01.851256 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851266 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851275 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851285 | orchestrator | 2025-05-19 20:06:01.851294 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-19 20:06:01.851300 | orchestrator | Monday 19 May 2025 20:02:51 +0000 (0:00:00.838) 0:05:36.964 ************ 2025-05-19 20:06:01.851305 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.851311 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.851316 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.851321 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851327 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851332 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851338 | orchestrator | 2025-05-19 20:06:01.851343 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-19 20:06:01.851349 | orchestrator | Monday 19 May 2025 20:02:52 +0000 (0:00:01.075) 0:05:38.039 ************ 2025-05-19 20:06:01.851359 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851365 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851370 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851376 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.851381 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.851386 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.851392 | orchestrator | 2025-05-19 20:06:01.851397 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-19 20:06:01.851403 | orchestrator | Monday 19 May 2025 20:02:55 +0000 (0:00:03.110) 0:05:41.150 ************ 2025-05-19 20:06:01.851417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851551 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.851556 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.851565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851632 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.851638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851765 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.851789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851795 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.851804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.851810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.851834 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851839 | orchestrator | 2025-05-19 20:06:01.851845 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-19 20:06:01.851850 | orchestrator | Monday 19 May 2025 20:02:57 +0000 (0:00:02.056) 
0:05:43.206 ************ 2025-05-19 20:06:01.851856 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-19 20:06:01.851861 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851867 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.851872 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-19 20:06:01.851877 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851883 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.851888 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-19 20:06:01.851893 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851898 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.851904 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-19 20:06:01.851909 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851914 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.851920 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-19 20:06:01.851925 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851930 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.851936 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-19 20:06:01.851941 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-19 20:06:01.851946 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.851952 | orchestrator | 2025-05-19 20:06:01.851957 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-19 20:06:01.851962 | orchestrator | Monday 19 May 2025 20:02:58 +0000 (0:00:01.107) 0:05:44.313 ************ 2025-05-19 20:06:01.851977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.851986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.852007 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.852043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.852049 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 20:06:01.852069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 20:06:01.852075 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 
20:06:01.852089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 
'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-19 20:06:01.852243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 20:06:01.852258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852264 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 20:06:01.852411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 20:06:01.852426 | orchestrator | 2025-05-19 20:06:01.852431 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 20:06:01.852451 | orchestrator | Monday 19 May 2025 20:03:02 +0000 (0:00:03.635) 0:05:47.949 ************ 2025-05-19 20:06:01.852457 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.852463 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.852468 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.852474 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.852479 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.852485 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.852490 | orchestrator | 2025-05-19 20:06:01.852495 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852501 | orchestrator | Monday 19 May 2025 20:03:03 +0000 (0:00:01.038) 0:05:48.987 ************ 2025-05-19 20:06:01.852506 | orchestrator | 2025-05-19 20:06:01.852512 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852517 | orchestrator | Monday 19 May 2025 20:03:03 +0000 (0:00:00.112) 0:05:49.100 ************ 2025-05-19 20:06:01.852527 | orchestrator | 2025-05-19 20:06:01.852532 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852538 | orchestrator | Monday 19 May 2025 20:03:03 +0000 (0:00:00.327) 0:05:49.427 ************ 2025-05-19 20:06:01.852543 | orchestrator | 2025-05-19 20:06:01.852549 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852554 | orchestrator | Monday 19 May 2025 20:03:03 +0000 (0:00:00.111) 0:05:49.539 ************ 2025-05-19 20:06:01.852559 | orchestrator | 2025-05-19 20:06:01.852565 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852570 | orchestrator | Monday 19 May 2025 20:03:04 +0000 (0:00:00.335) 0:05:49.874 ************ 2025-05-19 20:06:01.852575 | orchestrator | 2025-05-19 20:06:01.852581 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 20:06:01.852586 | orchestrator | Monday 19 May 2025 20:03:04 +0000 (0:00:00.110) 0:05:49.984 ************ 2025-05-19 20:06:01.852591 | orchestrator | 2025-05-19 20:06:01.852597 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-19 20:06:01.852602 | orchestrator | Monday 19 May 2025 20:03:04 +0000 (0:00:00.335) 0:05:50.320 ************ 2025-05-19 20:06:01.852608 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.852613 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.852618 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.852624 | orchestrator | 2025-05-19 20:06:01.852629 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-19 20:06:01.852635 | orchestrator | Monday 19 May 2025 20:03:17 +0000 (0:00:13.056) 0:06:03.376 ************ 2025-05-19 20:06:01.852640 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.852646 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.852651 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.852656 | orchestrator | 2025-05-19 20:06:01.852662 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-19 20:06:01.852667 | orchestrator | Monday 19 May 2025 20:03:33 +0000 (0:00:16.021) 0:06:19.398 ************ 2025-05-19 20:06:01.852676 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.852681 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.852687 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.852692 | orchestrator | 2025-05-19 20:06:01.852697 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-19 20:06:01.852703 | orchestrator | Monday 19 May 2025 20:03:59 +0000 (0:00:26.259) 0:06:45.657 ************ 2025-05-19 
20:06:01.852708 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.852714 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.852719 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.852724 | orchestrator | 2025-05-19 20:06:01.852730 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-19 20:06:01.852735 | orchestrator | Monday 19 May 2025 20:04:28 +0000 (0:00:28.734) 0:07:14.392 ************ 2025-05-19 20:06:01.852741 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.852746 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.852751 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.852757 | orchestrator | 2025-05-19 20:06:01.852762 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-19 20:06:01.852768 | orchestrator | Monday 19 May 2025 20:04:29 +0000 (0:00:00.842) 0:07:15.234 ************ 2025-05-19 20:06:01.852773 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.852778 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.852784 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.852789 | orchestrator | 2025-05-19 20:06:01.852795 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-19 20:06:01.852800 | orchestrator | Monday 19 May 2025 20:04:30 +0000 (0:00:00.989) 0:07:16.224 ************ 2025-05-19 20:06:01.852805 | orchestrator | changed: [testbed-node-3] 2025-05-19 20:06:01.852811 | orchestrator | changed: [testbed-node-4] 2025-05-19 20:06:01.852816 | orchestrator | changed: [testbed-node-5] 2025-05-19 20:06:01.852826 | orchestrator | 2025-05-19 20:06:01.852831 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-19 20:06:01.852837 | orchestrator | Monday 19 May 2025 20:04:53 +0000 (0:00:22.848) 0:07:39.072 ************ 2025-05-19 20:06:01.852842 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.852848 | orchestrator | 2025-05-19 20:06:01.852853 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-19 20:06:01.852859 | orchestrator | Monday 19 May 2025 20:04:53 +0000 (0:00:00.137) 0:07:39.209 ************ 2025-05-19 20:06:01.852864 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.852869 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.852875 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.852883 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.852889 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.852894 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
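The retry above is kolla-ansible's registration wait: the nova-cell role keeps polling the compute API (delegated to testbed-node-0) until every expected nova-compute host has registered, giving up after the 20 retries shown. A minimal stand-alone sketch of the same check, assuming the `openstack compute service list -f json` CLI output format and using the compute host names that appear in this log:

    import json
    import subprocess
    import time

    # Compute hosts expected in this run (taken from the log above).
    EXPECTED_HOSTS = {"testbed-node-3", "testbed-node-4", "testbed-node-5"}

    def registered_computes():
        # Ask the compute API for its services and keep the hosts whose
        # nova-compute service reports state "up".
        out = subprocess.run(
            ["openstack", "compute", "service", "list", "-f", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return {
            svc["Host"]
            for svc in json.loads(out)
            if svc["Binary"] == "nova-compute" and svc["State"] == "up"
        }

    # Poll the way the playbook does: up to 20 attempts with a pause in between.
    for _ in range(20):
        missing = EXPECTED_HOSTS - registered_computes()
        if not missing:
            break
        time.sleep(10)
    else:
        raise SystemExit(f"nova-compute never registered on: {sorted(missing)}")
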
2025-05-19 20:06:01.852900 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 20:06:01.852905 | orchestrator | 2025-05-19 20:06:01.852911 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-19 20:06:01.852916 | orchestrator | Monday 19 May 2025 20:05:16 +0000 (0:00:22.989) 0:08:02.199 ************ 2025-05-19 20:06:01.852921 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.852927 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.852932 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.852937 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.852943 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.852948 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.852953 | orchestrator | 2025-05-19 20:06:01.852959 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-19 20:06:01.852964 | orchestrator | Monday 19 May 2025 20:05:26 +0000 (0:00:10.199) 0:08:12.398 ************ 2025-05-19 20:06:01.852970 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.852975 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.852980 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.852986 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.852991 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.852997 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-05-19 20:06:01.853002 | orchestrator | 2025-05-19 20:06:01.853007 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 20:06:01.853013 | orchestrator | Monday 19 May 2025 20:05:30 +0000 (0:00:03.292) 0:08:15.691 ************ 2025-05-19 20:06:01.853018 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 20:06:01.853024 | orchestrator | 2025-05-19 20:06:01.853029 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 20:06:01.853034 | orchestrator | Monday 19 May 2025 20:05:40 +0000 (0:00:10.850) 0:08:26.542 ************ 2025-05-19 20:06:01.853040 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 20:06:01.853045 | orchestrator | 2025-05-19 20:06:01.853050 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-19 20:06:01.853056 | orchestrator | Monday 19 May 2025 20:05:41 +0000 (0:00:01.107) 0:08:27.649 ************ 2025-05-19 20:06:01.853061 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.853067 | orchestrator | 2025-05-19 20:06:01.853072 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-19 20:06:01.853077 | orchestrator | Monday 19 May 2025 20:05:43 +0000 (0:00:01.127) 0:08:28.776 ************ 2025-05-19 20:06:01.853083 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 20:06:01.853088 | orchestrator | 2025-05-19 20:06:01.853093 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-19 20:06:01.853099 | orchestrator | Monday 19 May 2025 20:05:52 +0000 (0:00:09.730) 0:08:38.507 ************ 2025-05-19 20:06:01.853108 | orchestrator | ok: [testbed-node-3] 2025-05-19 20:06:01.853114 | orchestrator | ok: [testbed-node-4] 2025-05-19 20:06:01.853119 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 20:06:01.853124 | orchestrator | ok: [testbed-node-0] 2025-05-19 20:06:01.853130 | orchestrator | ok: [testbed-node-1] 2025-05-19 20:06:01.853135 | orchestrator | ok: [testbed-node-2] 2025-05-19 20:06:01.853141 | orchestrator | 2025-05-19 20:06:01.853148 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-19 20:06:01.853154 | orchestrator | 2025-05-19 20:06:01.853160 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-19 20:06:01.853165 | orchestrator | Monday 19 May 2025 20:05:54 +0000 (0:00:02.134) 0:08:40.641 ************ 2025-05-19 20:06:01.853170 | orchestrator | changed: [testbed-node-0] 2025-05-19 20:06:01.853176 | orchestrator | changed: [testbed-node-1] 2025-05-19 20:06:01.853181 | orchestrator | changed: [testbed-node-2] 2025-05-19 20:06:01.853187 | orchestrator | 2025-05-19 20:06:01.853192 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-19 20:06:01.853198 | orchestrator | 2025-05-19 20:06:01.853203 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-19 20:06:01.853208 | orchestrator | Monday 19 May 2025 20:05:56 +0000 (0:00:01.063) 0:08:41.704 ************ 2025-05-19 20:06:01.853214 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.853219 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.853225 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.853230 | orchestrator | 2025-05-19 20:06:01.853236 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-19 20:06:01.853241 | orchestrator | 2025-05-19 20:06:01.853246 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-19 20:06:01.853252 | orchestrator | Monday 19 May 2025 20:05:56 +0000 (0:00:00.822) 0:08:42.527 ************ 2025-05-19 20:06:01.853257 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-19 20:06:01.853263 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-19 20:06:01.853268 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-19 20:06:01.853274 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-19 20:06:01.853279 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-19 20:06:01.853285 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853290 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-19 20:06:01.853295 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-19 20:06:01.853301 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-19 20:06:01.853306 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-19 20:06:01.853312 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-19 20:06:01.853317 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853327 | orchestrator | skipping: [testbed-node-3] 2025-05-19 20:06:01.853332 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-19 20:06:01.853338 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-19 20:06:01.853343 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
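The cell discovery steps above ("Get a list of existing cells", "Discover nova hosts") wrap nova-manage cell_v2 commands executed in the nova_conductor container on the first controller. A minimal sketch of running the same checks by hand, assuming Docker CLI access on testbed-node-0 and the container name shown in this log:

    import subprocess

    # Container name as it appears in the log above; adjust if your deployment differs.
    CONDUCTOR = "nova_conductor"

    def nova_manage(*args):
        # Execute a nova-manage command inside the conductor container and return its stdout.
        return subprocess.run(
            ["docker", "exec", CONDUCTOR, "nova-manage", *args],
            check=True, capture_output=True, text=True,
        ).stdout

    # Show the registered cells (cell0 plus the cell created earlier in this deployment) ...
    print(nova_manage("cell_v2", "list_cells", "--verbose"))
    # ... and map any newly registered compute services into their cell.
    print(nova_manage("cell_v2", "discover_hosts", "--by-service"))
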
2025-05-19 20:06:01.853348 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-19 20:06:01.853354 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-19 20:06:01.853359 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853365 | orchestrator | skipping: [testbed-node-4] 2025-05-19 20:06:01.853370 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-19 20:06:01.853376 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-19 20:06:01.853381 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-19 20:06:01.853390 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-19 20:06:01.853396 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-19 20:06:01.853401 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853406 | orchestrator | skipping: [testbed-node-5] 2025-05-19 20:06:01.853412 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-19 20:06:01.853417 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-19 20:06:01.853422 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-19 20:06:01.853428 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-19 20:06:01.853433 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-19 20:06:01.853486 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.853492 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853497 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.853503 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-19 20:06:01.853508 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-19 20:06:01.853514 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-19 20:06:01.853519 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-19 20:06:01.853524 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-19 20:06:01.853530 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-19 20:06:01.853535 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.853540 | orchestrator | 2025-05-19 20:06:01.853546 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-19 20:06:01.853551 | orchestrator | 2025-05-19 20:06:01.853557 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-19 20:06:01.853562 | orchestrator | Monday 19 May 2025 20:05:58 +0000 (0:00:01.345) 0:08:43.872 ************ 2025-05-19 20:06:01.853568 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-19 20:06:01.853573 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-19 20:06:01.853579 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.853584 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-19 20:06:01.853589 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-19 20:06:01.853595 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.853603 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-19 20:06:01.853609 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-05-19 20:06:01.853614 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.853620 | orchestrator | 2025-05-19 20:06:01.853625 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-19 20:06:01.853630 | orchestrator | 2025-05-19 20:06:01.853636 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-19 20:06:01.853641 | orchestrator | Monday 19 May 2025 20:05:59 +0000 (0:00:00.843) 0:08:44.715 ************ 2025-05-19 20:06:01.853647 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.853652 | orchestrator | 2025-05-19 20:06:01.853661 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-19 20:06:01.853669 | orchestrator | 2025-05-19 20:06:01.853679 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-19 20:06:01.853688 | orchestrator | Monday 19 May 2025 20:05:59 +0000 (0:00:00.940) 0:08:45.656 ************ 2025-05-19 20:06:01.853696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 20:06:01.853704 | orchestrator | skipping: [testbed-node-1] 2025-05-19 20:06:01.853713 | orchestrator | skipping: [testbed-node-2] 2025-05-19 20:06:01.853722 | orchestrator | 2025-05-19 20:06:01.853728 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 20:06:01.853735 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 20:06:01.853750 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-19 20:06:01.853759 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-19 20:06:01.853766 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-19 20:06:01.853774 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-19 20:06:01.853787 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-19 20:06:01.853795 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-19 20:06:01.853801 | orchestrator | 2025-05-19 20:06:01.853809 | orchestrator | 2025-05-19 20:06:01.853817 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 20:06:01.853824 | orchestrator | Monday 19 May 2025 20:06:00 +0000 (0:00:00.560) 0:08:46.216 ************ 2025-05-19 20:06:01.853832 | orchestrator | =============================================================================== 2025-05-19 20:06:01.853840 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.91s 2025-05-19 20:06:01.853848 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.73s 2025-05-19 20:06:01.853855 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.26s 2025-05-19 20:06:01.853863 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.99s 2025-05-19 20:06:01.853871 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.85s 2025-05-19 20:06:01.853879 | orchestrator | nova-cell : 
Running Nova cell bootstrap container ---------------------- 20.63s 2025-05-19 20:06:01.853887 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.24s 2025-05-19 20:06:01.853894 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.39s 2025-05-19 20:06:01.853899 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.02s 2025-05-19 20:06:01.853903 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.16s 2025-05-19 20:06:01.853908 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.06s 2025-05-19 20:06:01.853913 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.43s 2025-05-19 20:06:01.853920 | orchestrator | nova : Restart nova-api container -------------------------------------- 11.15s 2025-05-19 20:06:01.853928 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.03s 2025-05-19 20:06:01.853936 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.85s 2025-05-19 20:06:01.853944 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.85s 2025-05-19 20:06:01.853951 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.20s 2025-05-19 20:06:01.853958 | orchestrator | nova-cell : Copying files for nova-ssh --------------------------------- 10.06s 2025-05-19 20:06:01.853966 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.73s 2025-05-19 20:06:01.853974 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.70s 2025-05-19 20:06:01.853982 | orchestrator | 2025-05-19 20:06:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:01.853991 | orchestrator | 2025-05-19 20:06:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:04.884355 | orchestrator | 2025-05-19 20:06:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:04.884616 | orchestrator | 2025-05-19 20:06:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:07.932224 | orchestrator | 2025-05-19 20:06:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:07.932337 | orchestrator | 2025-05-19 20:06:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:10.984054 | orchestrator | 2025-05-19 20:06:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:10.984168 | orchestrator | 2025-05-19 20:06:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:14.037976 | orchestrator | 2025-05-19 20:06:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:14.038121 | orchestrator | 2025-05-19 20:06:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:17.087822 | orchestrator | 2025-05-19 20:06:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:17.087935 | orchestrator | 2025-05-19 20:06:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:06:20.130170 | orchestrator | 2025-05-19 20:06:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:06:20.130303 | orchestrator | 2025-05-19 20:06:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
20:06:23.174738 | orchestrator | 2025-05-19 20:06:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:06:23.174843 | orchestrator | 2025-05-19 20:06:23 | INFO  | Wait 1 second(s) until the next check
[... identical "Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED" / "Wait 1 second(s) until the next check" pairs repeated roughly every three seconds from 20:06:26 through 20:13:27 elided ...]
2025-05-19 20:13:30.212988 | orchestrator | 2025-05-19 20:13:30 | INFO  | Task fd1a0f33-9b74-42e4-8229-72aea20b9681 is in state STARTED
2025-05-19 20:13:30.213951 | orchestrator | 2025-05-19 20:13:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:13:30.214150 | orchestrator | 2025-05-19 20:13:30 | INFO  | Wait 1 second(s) until the next check
[... the same pair of STARTED checks plus wait message repeated at 20:13:33, 20:13:36 and 20:13:39 elided ...]
2025-05-19 20:13:42.430001 | orchestrator | 2025-05-19 20:13:42 | INFO  | Task fd1a0f33-9b74-42e4-8229-72aea20b9681 is in state SUCCESS
2025-05-19 20:13:42.431015 | orchestrator | 2025-05-19 20:13:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:13:42.431093 | orchestrator | 2025-05-19 20:13:42 | INFO  | Wait 1 second(s) until the next check
[... further "Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED" / "Wait 1 second(s) until the next check" cycles at roughly three-second intervals from 20:13:45 through 20:14:09 elided ...]
2025-05-19 20:14:12.926618 | orchestrator | 2025-05-19 20:14:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:14:12.926747 | orchestrator | 2025-05-19 20:14:12 | INFO  | Wait 1 second(s) until the next check
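The excerpt above is the output of a simple wait loop: each task ID is polled for its state, the state is logged, and the loop sleeps before the next round until every task reaches a terminal state such as SUCCESS. The block below is a minimal, hypothetical sketch of that pattern, not the OSISM implementation; `get_task_state()` and `wait_for_tasks()` are assumed helper names used only for illustration, with the state lookup stubbed out as a toy simulator.

```python
import logging
import time

logging.basicConfig(
    format="%(asctime)s | %(levelname)-5s | %(message)s", level=logging.INFO
)
log = logging.getLogger(__name__)

# Toy stand-in for a real state lookup (e.g. a task result backend);
# it simply reports SUCCESS after a few polls so the sketch is runnable.
_CALLS: dict[str, int] = {}


def get_task_state(task_id: str) -> str:
    _CALLS[task_id] = _CALLS.get(task_id, 0) + 1
    return "SUCCESS" if _CALLS[task_id] >= 3 else "STARTED"


def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    """Poll every task until it reaches a terminal state, logging each check."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):      # sorted() copies, so the set may
            state = get_task_state(task_id)  # be modified while iterating
            log.info("Task %s is in state %s", task_id, state)
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)


if __name__ == "__main__":
    wait_for_tasks(["6cbcb477-08de-4f2b-846d-588e50cbe210"])
```

In the log itself the checks land roughly three seconds apart even though the message announces a one-second wait; presumably the state lookup itself accounts for the extra time.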
[... identical "Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED" / "Wait 1 second(s) until the next check" cycles continue roughly every three seconds from 20:14:15 through 20:21:26 elided ...]
2025-05-19 20:21:29.195821 | orchestrator | 2025-05-19 20:21:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:21:29.195961 | orchestrator | 2025-05-19 20:21:29 |
INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:32.250700 | orchestrator | 2025-05-19 20:21:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:32.250813 | orchestrator | 2025-05-19 20:21:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:35.306729 | orchestrator | 2025-05-19 20:21:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:35.306834 | orchestrator | 2025-05-19 20:21:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:38.362651 | orchestrator | 2025-05-19 20:21:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:38.362764 | orchestrator | 2025-05-19 20:21:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:41.409022 | orchestrator | 2025-05-19 20:21:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:41.409142 | orchestrator | 2025-05-19 20:21:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:44.463073 | orchestrator | 2025-05-19 20:21:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:44.463200 | orchestrator | 2025-05-19 20:21:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:47.513492 | orchestrator | 2025-05-19 20:21:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:47.513603 | orchestrator | 2025-05-19 20:21:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:50.570493 | orchestrator | 2025-05-19 20:21:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:50.570631 | orchestrator | 2025-05-19 20:21:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:53.624040 | orchestrator | 2025-05-19 20:21:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:53.624145 | orchestrator | 2025-05-19 20:21:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:56.679398 | orchestrator | 2025-05-19 20:21:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:56.679491 | orchestrator | 2025-05-19 20:21:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:21:59.728778 | orchestrator | 2025-05-19 20:21:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:21:59.728910 | orchestrator | 2025-05-19 20:21:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:02.773398 | orchestrator | 2025-05-19 20:22:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:02.773514 | orchestrator | 2025-05-19 20:22:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:05.826939 | orchestrator | 2025-05-19 20:22:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:05.827059 | orchestrator | 2025-05-19 20:22:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:08.876470 | orchestrator | 2025-05-19 20:22:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:08.876578 | orchestrator | 2025-05-19 20:22:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:11.929262 | orchestrator | 2025-05-19 20:22:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:11.929461 | orchestrator | 2025-05-19 20:22:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:14.974308 | 
orchestrator | 2025-05-19 20:22:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:14.974500 | orchestrator | 2025-05-19 20:22:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:18.020415 | orchestrator | 2025-05-19 20:22:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:18.020530 | orchestrator | 2025-05-19 20:22:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:21.072597 | orchestrator | 2025-05-19 20:22:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:21.072689 | orchestrator | 2025-05-19 20:22:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:24.112624 | orchestrator | 2025-05-19 20:22:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:24.112879 | orchestrator | 2025-05-19 20:22:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:27.158928 | orchestrator | 2025-05-19 20:22:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:27.159058 | orchestrator | 2025-05-19 20:22:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:30.212252 | orchestrator | 2025-05-19 20:22:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:30.212571 | orchestrator | 2025-05-19 20:22:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:33.257126 | orchestrator | 2025-05-19 20:22:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:33.257229 | orchestrator | 2025-05-19 20:22:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:36.305799 | orchestrator | 2025-05-19 20:22:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:36.305912 | orchestrator | 2025-05-19 20:22:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:39.356590 | orchestrator | 2025-05-19 20:22:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:39.356693 | orchestrator | 2025-05-19 20:22:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:42.400711 | orchestrator | 2025-05-19 20:22:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:42.400831 | orchestrator | 2025-05-19 20:22:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:45.449113 | orchestrator | 2025-05-19 20:22:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:45.449207 | orchestrator | 2025-05-19 20:22:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:48.499604 | orchestrator | 2025-05-19 20:22:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:48.499714 | orchestrator | 2025-05-19 20:22:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:51.546533 | orchestrator | 2025-05-19 20:22:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:51.546653 | orchestrator | 2025-05-19 20:22:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:54.597124 | orchestrator | 2025-05-19 20:22:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:54.597232 | orchestrator | 2025-05-19 20:22:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:22:57.649782 | orchestrator | 2025-05-19 20:22:57 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:22:57.649909 | orchestrator | 2025-05-19 20:22:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:00.697428 | orchestrator | 2025-05-19 20:23:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:00.697636 | orchestrator | 2025-05-19 20:23:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:03.749909 | orchestrator | 2025-05-19 20:23:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:03.750100 | orchestrator | 2025-05-19 20:23:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:06.800753 | orchestrator | 2025-05-19 20:23:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:06.800860 | orchestrator | 2025-05-19 20:23:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:09.850538 | orchestrator | 2025-05-19 20:23:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:09.850634 | orchestrator | 2025-05-19 20:23:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:12.902313 | orchestrator | 2025-05-19 20:23:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:12.902409 | orchestrator | 2025-05-19 20:23:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:15.944863 | orchestrator | 2025-05-19 20:23:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:15.944961 | orchestrator | 2025-05-19 20:23:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:18.986714 | orchestrator | 2025-05-19 20:23:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:18.986824 | orchestrator | 2025-05-19 20:23:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:22.038949 | orchestrator | 2025-05-19 20:23:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:22.039028 | orchestrator | 2025-05-19 20:23:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:25.105744 | orchestrator | 2025-05-19 20:23:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:25.105844 | orchestrator | 2025-05-19 20:23:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:28.146201 | orchestrator | 2025-05-19 20:23:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:28.146310 | orchestrator | 2025-05-19 20:23:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:31.198356 | orchestrator | 2025-05-19 20:23:31 | INFO  | Task e62a24da-4d45-481c-b203-4fd9367fdc88 is in state STARTED 2025-05-19 20:23:31.199119 | orchestrator | 2025-05-19 20:23:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:31.199220 | orchestrator | 2025-05-19 20:23:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:34.253638 | orchestrator | 2025-05-19 20:23:34 | INFO  | Task e62a24da-4d45-481c-b203-4fd9367fdc88 is in state STARTED 2025-05-19 20:23:34.254741 | orchestrator | 2025-05-19 20:23:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:34.255102 | orchestrator | 2025-05-19 20:23:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:37.321558 | orchestrator | 2025-05-19 20:23:37 | INFO  | Task e62a24da-4d45-481c-b203-4fd9367fdc88 is in state STARTED 
2025-05-19 20:23:37.323350 | orchestrator | 2025-05-19 20:23:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:37.323401 | orchestrator | 2025-05-19 20:23:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:40.371266 | orchestrator | 2025-05-19 20:23:40 | INFO  | Task e62a24da-4d45-481c-b203-4fd9367fdc88 is in state SUCCESS 2025-05-19 20:23:40.372202 | orchestrator | 2025-05-19 20:23:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:40.372315 | orchestrator | 2025-05-19 20:23:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:43.419927 | orchestrator | 2025-05-19 20:23:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:43.420021 | orchestrator | 2025-05-19 20:23:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:46.472217 | orchestrator | 2025-05-19 20:23:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:46.472350 | orchestrator | 2025-05-19 20:23:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:49.523844 | orchestrator | 2025-05-19 20:23:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:49.523960 | orchestrator | 2025-05-19 20:23:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:52.573377 | orchestrator | 2025-05-19 20:23:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:52.573569 | orchestrator | 2025-05-19 20:23:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:55.617911 | orchestrator | 2025-05-19 20:23:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:55.618076 | orchestrator | 2025-05-19 20:23:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:23:58.667662 | orchestrator | 2025-05-19 20:23:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:23:58.667850 | orchestrator | 2025-05-19 20:23:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:01.711986 | orchestrator | 2025-05-19 20:24:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:01.712093 | orchestrator | 2025-05-19 20:24:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:04.758958 | orchestrator | 2025-05-19 20:24:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:04.759053 | orchestrator | 2025-05-19 20:24:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:07.809839 | orchestrator | 2025-05-19 20:24:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:07.809952 | orchestrator | 2025-05-19 20:24:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:10.858588 | orchestrator | 2025-05-19 20:24:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:10.858798 | orchestrator | 2025-05-19 20:24:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:13.909605 | orchestrator | 2025-05-19 20:24:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:13.909700 | orchestrator | 2025-05-19 20:24:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:16.955136 | orchestrator | 2025-05-19 20:24:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:16.955234 | orchestrator | 2025-05-19 
20:24:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:20.007129 | orchestrator | 2025-05-19 20:24:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:20.007237 | orchestrator | 2025-05-19 20:24:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:23.052343 | orchestrator | 2025-05-19 20:24:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:23.052457 | orchestrator | 2025-05-19 20:24:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:26.089464 | orchestrator | 2025-05-19 20:24:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:26.089582 | orchestrator | 2025-05-19 20:24:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:29.135733 | orchestrator | 2025-05-19 20:24:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:29.135876 | orchestrator | 2025-05-19 20:24:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:32.173835 | orchestrator | 2025-05-19 20:24:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:32.173941 | orchestrator | 2025-05-19 20:24:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:35.216758 | orchestrator | 2025-05-19 20:24:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:35.216879 | orchestrator | 2025-05-19 20:24:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:38.259916 | orchestrator | 2025-05-19 20:24:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:38.260025 | orchestrator | 2025-05-19 20:24:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:41.307117 | orchestrator | 2025-05-19 20:24:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:41.307230 | orchestrator | 2025-05-19 20:24:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:44.357312 | orchestrator | 2025-05-19 20:24:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:44.357417 | orchestrator | 2025-05-19 20:24:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:47.404897 | orchestrator | 2025-05-19 20:24:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:47.405014 | orchestrator | 2025-05-19 20:24:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:50.449037 | orchestrator | 2025-05-19 20:24:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:50.449159 | orchestrator | 2025-05-19 20:24:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:53.498671 | orchestrator | 2025-05-19 20:24:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:53.498791 | orchestrator | 2025-05-19 20:24:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:56.546299 | orchestrator | 2025-05-19 20:24:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:56.546460 | orchestrator | 2025-05-19 20:24:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:24:59.598892 | orchestrator | 2025-05-19 20:24:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:24:59.598987 | orchestrator | 2025-05-19 20:24:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
20:25:02.650517 | orchestrator | 2025-05-19 20:25:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:02.650652 | orchestrator | 2025-05-19 20:25:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:05.707352 | orchestrator | 2025-05-19 20:25:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:05.707495 | orchestrator | 2025-05-19 20:25:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:08.759175 | orchestrator | 2025-05-19 20:25:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:08.759297 | orchestrator | 2025-05-19 20:25:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:11.812663 | orchestrator | 2025-05-19 20:25:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:11.812765 | orchestrator | 2025-05-19 20:25:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:14.868818 | orchestrator | 2025-05-19 20:25:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:14.868898 | orchestrator | 2025-05-19 20:25:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:17.915174 | orchestrator | 2025-05-19 20:25:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:17.915309 | orchestrator | 2025-05-19 20:25:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:20.957909 | orchestrator | 2025-05-19 20:25:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:20.958013 | orchestrator | 2025-05-19 20:25:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:24.007812 | orchestrator | 2025-05-19 20:25:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:24.007933 | orchestrator | 2025-05-19 20:25:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:27.060455 | orchestrator | 2025-05-19 20:25:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:27.060556 | orchestrator | 2025-05-19 20:25:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:30.098468 | orchestrator | 2025-05-19 20:25:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:30.098746 | orchestrator | 2025-05-19 20:25:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:33.142139 | orchestrator | 2025-05-19 20:25:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:33.142225 | orchestrator | 2025-05-19 20:25:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:36.187282 | orchestrator | 2025-05-19 20:25:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:36.187418 | orchestrator | 2025-05-19 20:25:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:39.228539 | orchestrator | 2025-05-19 20:25:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:39.228696 | orchestrator | 2025-05-19 20:25:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:42.275295 | orchestrator | 2025-05-19 20:25:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:42.275411 | orchestrator | 2025-05-19 20:25:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:45.324913 | orchestrator | 2025-05-19 20:25:45 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:45.325009 | orchestrator | 2025-05-19 20:25:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:48.371088 | orchestrator | 2025-05-19 20:25:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:48.371190 | orchestrator | 2025-05-19 20:25:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:51.419194 | orchestrator | 2025-05-19 20:25:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:51.419294 | orchestrator | 2025-05-19 20:25:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:54.466567 | orchestrator | 2025-05-19 20:25:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:54.466716 | orchestrator | 2025-05-19 20:25:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:25:57.510786 | orchestrator | 2025-05-19 20:25:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:25:57.510886 | orchestrator | 2025-05-19 20:25:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:00.553381 | orchestrator | 2025-05-19 20:26:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:00.553485 | orchestrator | 2025-05-19 20:26:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:03.606851 | orchestrator | 2025-05-19 20:26:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:03.606958 | orchestrator | 2025-05-19 20:26:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:06.655070 | orchestrator | 2025-05-19 20:26:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:06.655168 | orchestrator | 2025-05-19 20:26:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:09.699090 | orchestrator | 2025-05-19 20:26:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:09.699192 | orchestrator | 2025-05-19 20:26:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:12.747183 | orchestrator | 2025-05-19 20:26:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:12.747328 | orchestrator | 2025-05-19 20:26:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:15.792998 | orchestrator | 2025-05-19 20:26:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:15.793091 | orchestrator | 2025-05-19 20:26:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:18.836462 | orchestrator | 2025-05-19 20:26:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:18.836592 | orchestrator | 2025-05-19 20:26:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:21.884121 | orchestrator | 2025-05-19 20:26:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:21.884230 | orchestrator | 2025-05-19 20:26:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:24.937579 | orchestrator | 2025-05-19 20:26:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:24.937718 | orchestrator | 2025-05-19 20:26:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:27.991419 | orchestrator | 2025-05-19 20:26:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 
20:26:27.991534 | orchestrator | 2025-05-19 20:26:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:31.039014 | orchestrator | 2025-05-19 20:26:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:31.039121 | orchestrator | 2025-05-19 20:26:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:34.095074 | orchestrator | 2025-05-19 20:26:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:34.095150 | orchestrator | 2025-05-19 20:26:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:37.135960 | orchestrator | 2025-05-19 20:26:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:37.136069 | orchestrator | 2025-05-19 20:26:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:40.178140 | orchestrator | 2025-05-19 20:26:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:40.178334 | orchestrator | 2025-05-19 20:26:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:43.227129 | orchestrator | 2025-05-19 20:26:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:43.227300 | orchestrator | 2025-05-19 20:26:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:46.274709 | orchestrator | 2025-05-19 20:26:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:46.274913 | orchestrator | 2025-05-19 20:26:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:49.324990 | orchestrator | 2025-05-19 20:26:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:49.325118 | orchestrator | 2025-05-19 20:26:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:52.367412 | orchestrator | 2025-05-19 20:26:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:52.367521 | orchestrator | 2025-05-19 20:26:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:55.415791 | orchestrator | 2025-05-19 20:26:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:55.415919 | orchestrator | 2025-05-19 20:26:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:26:58.469477 | orchestrator | 2025-05-19 20:26:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:26:58.469591 | orchestrator | 2025-05-19 20:26:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:01.517539 | orchestrator | 2025-05-19 20:27:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:01.517644 | orchestrator | 2025-05-19 20:27:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:04.559115 | orchestrator | 2025-05-19 20:27:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:04.559258 | orchestrator | 2025-05-19 20:27:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:07.607111 | orchestrator | 2025-05-19 20:27:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:07.607224 | orchestrator | 2025-05-19 20:27:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:10.657424 | orchestrator | 2025-05-19 20:27:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:10.657556 | orchestrator | 2025-05-19 20:27:10 | INFO  | Wait 1 second(s) 
until the next check 2025-05-19 20:27:13.708193 | orchestrator | 2025-05-19 20:27:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:13.708317 | orchestrator | 2025-05-19 20:27:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:16.759932 | orchestrator | 2025-05-19 20:27:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:16.760039 | orchestrator | 2025-05-19 20:27:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:19.812230 | orchestrator | 2025-05-19 20:27:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:19.812341 | orchestrator | 2025-05-19 20:27:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:22.859600 | orchestrator | 2025-05-19 20:27:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:22.859680 | orchestrator | 2025-05-19 20:27:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:25.914276 | orchestrator | 2025-05-19 20:27:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:25.914448 | orchestrator | 2025-05-19 20:27:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:28.966686 | orchestrator | 2025-05-19 20:27:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:28.966845 | orchestrator | 2025-05-19 20:27:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:32.018985 | orchestrator | 2025-05-19 20:27:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:32.019100 | orchestrator | 2025-05-19 20:27:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:35.063501 | orchestrator | 2025-05-19 20:27:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:35.063588 | orchestrator | 2025-05-19 20:27:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:38.101148 | orchestrator | 2025-05-19 20:27:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:38.101234 | orchestrator | 2025-05-19 20:27:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:41.142824 | orchestrator | 2025-05-19 20:27:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:41.142991 | orchestrator | 2025-05-19 20:27:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:44.189205 | orchestrator | 2025-05-19 20:27:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:44.189313 | orchestrator | 2025-05-19 20:27:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:47.235772 | orchestrator | 2025-05-19 20:27:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:47.235928 | orchestrator | 2025-05-19 20:27:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:50.279622 | orchestrator | 2025-05-19 20:27:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:50.279751 | orchestrator | 2025-05-19 20:27:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:53.328819 | orchestrator | 2025-05-19 20:27:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:53.328994 | orchestrator | 2025-05-19 20:27:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:56.383735 | orchestrator | 2025-05-19 
20:27:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:56.383843 | orchestrator | 2025-05-19 20:27:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:27:59.429300 | orchestrator | 2025-05-19 20:27:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:27:59.429822 | orchestrator | 2025-05-19 20:27:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:02.482437 | orchestrator | 2025-05-19 20:28:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:02.482544 | orchestrator | 2025-05-19 20:28:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:05.533953 | orchestrator | 2025-05-19 20:28:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:05.534063 | orchestrator | 2025-05-19 20:28:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:08.586324 | orchestrator | 2025-05-19 20:28:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:08.586420 | orchestrator | 2025-05-19 20:28:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:11.632338 | orchestrator | 2025-05-19 20:28:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:11.633080 | orchestrator | 2025-05-19 20:28:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:14.676465 | orchestrator | 2025-05-19 20:28:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:14.676561 | orchestrator | 2025-05-19 20:28:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:17.725126 | orchestrator | 2025-05-19 20:28:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:17.725265 | orchestrator | 2025-05-19 20:28:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:20.772101 | orchestrator | 2025-05-19 20:28:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:20.772233 | orchestrator | 2025-05-19 20:28:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:23.820822 | orchestrator | 2025-05-19 20:28:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:23.821046 | orchestrator | 2025-05-19 20:28:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:26.871591 | orchestrator | 2025-05-19 20:28:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:26.871701 | orchestrator | 2025-05-19 20:28:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:29.921674 | orchestrator | 2025-05-19 20:28:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:29.921784 | orchestrator | 2025-05-19 20:28:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:32.963643 | orchestrator | 2025-05-19 20:28:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:32.963764 | orchestrator | 2025-05-19 20:28:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:36.006867 | orchestrator | 2025-05-19 20:28:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:36.007053 | orchestrator | 2025-05-19 20:28:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:39.050054 | orchestrator | 2025-05-19 20:28:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 
2025-05-19 20:28:39.050156 | orchestrator | 2025-05-19 20:28:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:42.099043 | orchestrator | 2025-05-19 20:28:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:42.099151 | orchestrator | 2025-05-19 20:28:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:45.145164 | orchestrator | 2025-05-19 20:28:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:45.145243 | orchestrator | 2025-05-19 20:28:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:48.194863 | orchestrator | 2025-05-19 20:28:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:48.195031 | orchestrator | 2025-05-19 20:28:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:51.247539 | orchestrator | 2025-05-19 20:28:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:51.247654 | orchestrator | 2025-05-19 20:28:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:54.300352 | orchestrator | 2025-05-19 20:28:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:54.300470 | orchestrator | 2025-05-19 20:28:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:28:57.346455 | orchestrator | 2025-05-19 20:28:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:28:57.346576 | orchestrator | 2025-05-19 20:28:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:00.389948 | orchestrator | 2025-05-19 20:29:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:00.390073 | orchestrator | 2025-05-19 20:29:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:03.440866 | orchestrator | 2025-05-19 20:29:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:03.441134 | orchestrator | 2025-05-19 20:29:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:06.489396 | orchestrator | 2025-05-19 20:29:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:06.489471 | orchestrator | 2025-05-19 20:29:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:09.546875 | orchestrator | 2025-05-19 20:29:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:09.547073 | orchestrator | 2025-05-19 20:29:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:12.595192 | orchestrator | 2025-05-19 20:29:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:12.595296 | orchestrator | 2025-05-19 20:29:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:15.646328 | orchestrator | 2025-05-19 20:29:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:15.646440 | orchestrator | 2025-05-19 20:29:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:18.695857 | orchestrator | 2025-05-19 20:29:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:18.696073 | orchestrator | 2025-05-19 20:29:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:21.742651 | orchestrator | 2025-05-19 20:29:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:21.742799 | orchestrator | 2025-05-19 20:29:21 | INFO  | Wait 1 
second(s) until the next check 2025-05-19 20:29:24.795456 | orchestrator | 2025-05-19 20:29:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:24.795597 | orchestrator | 2025-05-19 20:29:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:27.844603 | orchestrator | 2025-05-19 20:29:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:27.844889 | orchestrator | 2025-05-19 20:29:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:30.891620 | orchestrator | 2025-05-19 20:29:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:30.892738 | orchestrator | 2025-05-19 20:29:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:33.937124 | orchestrator | 2025-05-19 20:29:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:33.937236 | orchestrator | 2025-05-19 20:29:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:36.977489 | orchestrator | 2025-05-19 20:29:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:36.977598 | orchestrator | 2025-05-19 20:29:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:40.030843 | orchestrator | 2025-05-19 20:29:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:40.031047 | orchestrator | 2025-05-19 20:29:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:43.083522 | orchestrator | 2025-05-19 20:29:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:43.083609 | orchestrator | 2025-05-19 20:29:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:46.124571 | orchestrator | 2025-05-19 20:29:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:46.124667 | orchestrator | 2025-05-19 20:29:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:49.173643 | orchestrator | 2025-05-19 20:29:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:49.173743 | orchestrator | 2025-05-19 20:29:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:52.218627 | orchestrator | 2025-05-19 20:29:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:52.218738 | orchestrator | 2025-05-19 20:29:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:55.268870 | orchestrator | 2025-05-19 20:29:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:55.269024 | orchestrator | 2025-05-19 20:29:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:29:58.313840 | orchestrator | 2025-05-19 20:29:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:29:58.371875 | orchestrator | 2025-05-19 20:29:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:01.357439 | orchestrator | 2025-05-19 20:30:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:01.357531 | orchestrator | 2025-05-19 20:30:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:04.410089 | orchestrator | 2025-05-19 20:30:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:04.410207 | orchestrator | 2025-05-19 20:30:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:07.461929 | orchestrator | 
2025-05-19 20:30:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:07.462144 | orchestrator | 2025-05-19 20:30:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:10.515077 | orchestrator | 2025-05-19 20:30:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:10.515184 | orchestrator | 2025-05-19 20:30:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:13.566729 | orchestrator | 2025-05-19 20:30:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:13.566827 | orchestrator | 2025-05-19 20:30:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:16.618177 | orchestrator | 2025-05-19 20:30:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:16.618289 | orchestrator | 2025-05-19 20:30:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:19.666279 | orchestrator | 2025-05-19 20:30:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:19.666390 | orchestrator | 2025-05-19 20:30:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:22.718285 | orchestrator | 2025-05-19 20:30:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:22.718367 | orchestrator | 2025-05-19 20:30:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:25.764618 | orchestrator | 2025-05-19 20:30:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:25.764757 | orchestrator | 2025-05-19 20:30:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:28.809541 | orchestrator | 2025-05-19 20:30:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:28.810815 | orchestrator | 2025-05-19 20:30:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:31.859258 | orchestrator | 2025-05-19 20:30:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:31.859357 | orchestrator | 2025-05-19 20:30:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:34.910574 | orchestrator | 2025-05-19 20:30:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:34.910748 | orchestrator | 2025-05-19 20:30:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:37.956938 | orchestrator | 2025-05-19 20:30:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:37.957094 | orchestrator | 2025-05-19 20:30:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:41.019933 | orchestrator | 2025-05-19 20:30:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:41.020139 | orchestrator | 2025-05-19 20:30:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:44.068089 | orchestrator | 2025-05-19 20:30:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:44.068206 | orchestrator | 2025-05-19 20:30:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:47.115747 | orchestrator | 2025-05-19 20:30:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:47.115855 | orchestrator | 2025-05-19 20:30:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:50.155570 | orchestrator | 2025-05-19 20:30:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in 
state STARTED 2025-05-19 20:30:50.155677 | orchestrator | 2025-05-19 20:30:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:53.194949 | orchestrator | 2025-05-19 20:30:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:53.195184 | orchestrator | 2025-05-19 20:30:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:56.236456 | orchestrator | 2025-05-19 20:30:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:56.236559 | orchestrator | 2025-05-19 20:30:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:30:59.282384 | orchestrator | 2025-05-19 20:30:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:30:59.282483 | orchestrator | 2025-05-19 20:30:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:02.333505 | orchestrator | 2025-05-19 20:31:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:02.333617 | orchestrator | 2025-05-19 20:31:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:05.381544 | orchestrator | 2025-05-19 20:31:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:05.381639 | orchestrator | 2025-05-19 20:31:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:08.427565 | orchestrator | 2025-05-19 20:31:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:08.427708 | orchestrator | 2025-05-19 20:31:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:11.479881 | orchestrator | 2025-05-19 20:31:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:11.480004 | orchestrator | 2025-05-19 20:31:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:14.524645 | orchestrator | 2025-05-19 20:31:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:14.524709 | orchestrator | 2025-05-19 20:31:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:17.568820 | orchestrator | 2025-05-19 20:31:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:17.568927 | orchestrator | 2025-05-19 20:31:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:20.612212 | orchestrator | 2025-05-19 20:31:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:20.612338 | orchestrator | 2025-05-19 20:31:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:23.662428 | orchestrator | 2025-05-19 20:31:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:23.662540 | orchestrator | 2025-05-19 20:31:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:26.710177 | orchestrator | 2025-05-19 20:31:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:26.710287 | orchestrator | 2025-05-19 20:31:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:29.763534 | orchestrator | 2025-05-19 20:31:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:29.763643 | orchestrator | 2025-05-19 20:31:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:32.810457 | orchestrator | 2025-05-19 20:31:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:32.810571 | orchestrator | 2025-05-19 20:31:32 | 
INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:35.855009 | orchestrator | 2025-05-19 20:31:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:35.855285 | orchestrator | 2025-05-19 20:31:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:38.900723 | orchestrator | 2025-05-19 20:31:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:38.900854 | orchestrator | 2025-05-19 20:31:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:41.948674 | orchestrator | 2025-05-19 20:31:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:41.948783 | orchestrator | 2025-05-19 20:31:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:45.002593 | orchestrator | 2025-05-19 20:31:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:45.005294 | orchestrator | 2025-05-19 20:31:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:48.055744 | orchestrator | 2025-05-19 20:31:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:48.055838 | orchestrator | 2025-05-19 20:31:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:51.099772 | orchestrator | 2025-05-19 20:31:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:51.099882 | orchestrator | 2025-05-19 20:31:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:54.148703 | orchestrator | 2025-05-19 20:31:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:54.148819 | orchestrator | 2025-05-19 20:31:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:31:57.193112 | orchestrator | 2025-05-19 20:31:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:31:57.193218 | orchestrator | 2025-05-19 20:31:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:00.229285 | orchestrator | 2025-05-19 20:32:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:00.229396 | orchestrator | 2025-05-19 20:32:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:03.278772 | orchestrator | 2025-05-19 20:32:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:03.278848 | orchestrator | 2025-05-19 20:32:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:06.331127 | orchestrator | 2025-05-19 20:32:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:06.331240 | orchestrator | 2025-05-19 20:32:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:09.376556 | orchestrator | 2025-05-19 20:32:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:09.376632 | orchestrator | 2025-05-19 20:32:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:12.421593 | orchestrator | 2025-05-19 20:32:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:12.421717 | orchestrator | 2025-05-19 20:32:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:15.474009 | orchestrator | 2025-05-19 20:32:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:15.474285 | orchestrator | 2025-05-19 20:32:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:18.519980 | 
orchestrator | 2025-05-19 20:32:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:18.520143 | orchestrator | 2025-05-19 20:32:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:21.575929 | orchestrator | 2025-05-19 20:32:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:21.576047 | orchestrator | 2025-05-19 20:32:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:24.624661 | orchestrator | 2025-05-19 20:32:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:24.624831 | orchestrator | 2025-05-19 20:32:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:27.685909 | orchestrator | 2025-05-19 20:32:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:27.686012 | orchestrator | 2025-05-19 20:32:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:30.735282 | orchestrator | 2025-05-19 20:32:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:30.735386 | orchestrator | 2025-05-19 20:32:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:33.779877 | orchestrator | 2025-05-19 20:32:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:33.780639 | orchestrator | 2025-05-19 20:32:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:36.830507 | orchestrator | 2025-05-19 20:32:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:36.830619 | orchestrator | 2025-05-19 20:32:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:39.880921 | orchestrator | 2025-05-19 20:32:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:39.881032 | orchestrator | 2025-05-19 20:32:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:42.929892 | orchestrator | 2025-05-19 20:32:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:42.930170 | orchestrator | 2025-05-19 20:32:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:45.982697 | orchestrator | 2025-05-19 20:32:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:45.982810 | orchestrator | 2025-05-19 20:32:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:49.029959 | orchestrator | 2025-05-19 20:32:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:49.030177 | orchestrator | 2025-05-19 20:32:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:52.081070 | orchestrator | 2025-05-19 20:32:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:52.081169 | orchestrator | 2025-05-19 20:32:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:55.121510 | orchestrator | 2025-05-19 20:32:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:55.121627 | orchestrator | 2025-05-19 20:32:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:32:58.166311 | orchestrator | 2025-05-19 20:32:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:32:58.166422 | orchestrator | 2025-05-19 20:32:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:01.208527 | orchestrator | 2025-05-19 20:33:01 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:01.208640 | orchestrator | 2025-05-19 20:33:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:04.259025 | orchestrator | 2025-05-19 20:33:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:04.259193 | orchestrator | 2025-05-19 20:33:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:07.308635 | orchestrator | 2025-05-19 20:33:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:07.308778 | orchestrator | 2025-05-19 20:33:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:10.345591 | orchestrator | 2025-05-19 20:33:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:10.345732 | orchestrator | 2025-05-19 20:33:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:13.385669 | orchestrator | 2025-05-19 20:33:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:13.385795 | orchestrator | 2025-05-19 20:33:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:16.437209 | orchestrator | 2025-05-19 20:33:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:16.437316 | orchestrator | 2025-05-19 20:33:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:19.490632 | orchestrator | 2025-05-19 20:33:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:19.490685 | orchestrator | 2025-05-19 20:33:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:22.535719 | orchestrator | 2025-05-19 20:33:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:22.535785 | orchestrator | 2025-05-19 20:33:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:25.584828 | orchestrator | 2025-05-19 20:33:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:25.584913 | orchestrator | 2025-05-19 20:33:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:28.632007 | orchestrator | 2025-05-19 20:33:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:28.632214 | orchestrator | 2025-05-19 20:33:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:31.688540 | orchestrator | 2025-05-19 20:33:31 | INFO  | Task 9b171edc-8761-4332-8c03-0b588d89225a is in state STARTED 2025-05-19 20:33:31.689612 | orchestrator | 2025-05-19 20:33:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:31.689831 | orchestrator | 2025-05-19 20:33:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:34.749459 | orchestrator | 2025-05-19 20:33:34 | INFO  | Task 9b171edc-8761-4332-8c03-0b588d89225a is in state STARTED 2025-05-19 20:33:34.750437 | orchestrator | 2025-05-19 20:33:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:34.750789 | orchestrator | 2025-05-19 20:33:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:37.806943 | orchestrator | 2025-05-19 20:33:37 | INFO  | Task 9b171edc-8761-4332-8c03-0b588d89225a is in state STARTED 2025-05-19 20:33:37.808618 | orchestrator | 2025-05-19 20:33:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:37.811040 | orchestrator | 2025-05-19 20:33:37 | INFO  | Wait 1 second(s) until the next check 
2025-05-19 20:33:40.856676 | orchestrator | 2025-05-19 20:33:40 | INFO  | Task 9b171edc-8761-4332-8c03-0b588d89225a is in state SUCCESS 2025-05-19 20:33:40.858471 | orchestrator | 2025-05-19 20:33:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:40.858506 | orchestrator | 2025-05-19 20:33:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:43.905353 | orchestrator | 2025-05-19 20:33:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:43.905481 | orchestrator | 2025-05-19 20:33:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:46.958063 | orchestrator | 2025-05-19 20:33:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:46.958190 | orchestrator | 2025-05-19 20:33:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:50.008426 | orchestrator | 2025-05-19 20:33:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:50.008607 | orchestrator | 2025-05-19 20:33:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:53.066921 | orchestrator | 2025-05-19 20:33:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:53.067026 | orchestrator | 2025-05-19 20:33:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:56.123625 | orchestrator | 2025-05-19 20:33:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:56.123740 | orchestrator | 2025-05-19 20:33:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:33:59.170671 | orchestrator | 2025-05-19 20:33:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:33:59.170775 | orchestrator | 2025-05-19 20:33:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:02.218204 | orchestrator | 2025-05-19 20:34:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:02.218339 | orchestrator | 2025-05-19 20:34:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:05.261234 | orchestrator | 2025-05-19 20:34:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:05.261339 | orchestrator | 2025-05-19 20:34:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:08.307986 | orchestrator | 2025-05-19 20:34:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:08.308129 | orchestrator | 2025-05-19 20:34:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:11.352550 | orchestrator | 2025-05-19 20:34:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:11.352628 | orchestrator | 2025-05-19 20:34:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:14.404531 | orchestrator | 2025-05-19 20:34:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:14.404650 | orchestrator | 2025-05-19 20:34:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:17.453394 | orchestrator | 2025-05-19 20:34:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:17.453507 | orchestrator | 2025-05-19 20:34:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:20.504114 | orchestrator | 2025-05-19 20:34:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:20.504259 | orchestrator | 2025-05-19 
20:34:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:23.548286 | orchestrator | 2025-05-19 20:34:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:23.548411 | orchestrator | 2025-05-19 20:34:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:26.587301 | orchestrator | 2025-05-19 20:34:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:26.588232 | orchestrator | 2025-05-19 20:34:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:29.632008 | orchestrator | 2025-05-19 20:34:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:29.632112 | orchestrator | 2025-05-19 20:34:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:32.682880 | orchestrator | 2025-05-19 20:34:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:32.682993 | orchestrator | 2025-05-19 20:34:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:35.735315 | orchestrator | 2025-05-19 20:34:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:35.735421 | orchestrator | 2025-05-19 20:34:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:38.782343 | orchestrator | 2025-05-19 20:34:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:38.782462 | orchestrator | 2025-05-19 20:34:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:41.834600 | orchestrator | 2025-05-19 20:34:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:41.834708 | orchestrator | 2025-05-19 20:34:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:44.885781 | orchestrator | 2025-05-19 20:34:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:44.886155 | orchestrator | 2025-05-19 20:34:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:47.936083 | orchestrator | 2025-05-19 20:34:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:47.936236 | orchestrator | 2025-05-19 20:34:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:50.987374 | orchestrator | 2025-05-19 20:34:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:50.987454 | orchestrator | 2025-05-19 20:34:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:54.039088 | orchestrator | 2025-05-19 20:34:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:54.039260 | orchestrator | 2025-05-19 20:34:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:34:57.076061 | orchestrator | 2025-05-19 20:34:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:34:57.076205 | orchestrator | 2025-05-19 20:34:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:00.115875 | orchestrator | 2025-05-19 20:35:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:00.116247 | orchestrator | 2025-05-19 20:35:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:03.165959 | orchestrator | 2025-05-19 20:35:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:03.166167 | orchestrator | 2025-05-19 20:35:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
20:35:06.206357 | orchestrator | 2025-05-19 20:35:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:06.206448 | orchestrator | 2025-05-19 20:35:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:09.251749 | orchestrator | 2025-05-19 20:35:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:09.251856 | orchestrator | 2025-05-19 20:35:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:12.300394 | orchestrator | 2025-05-19 20:35:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:12.300503 | orchestrator | 2025-05-19 20:35:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:15.347423 | orchestrator | 2025-05-19 20:35:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:15.347540 | orchestrator | 2025-05-19 20:35:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:18.397951 | orchestrator | 2025-05-19 20:35:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:18.398120 | orchestrator | 2025-05-19 20:35:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:21.449274 | orchestrator | 2025-05-19 20:35:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:21.449388 | orchestrator | 2025-05-19 20:35:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:24.501134 | orchestrator | 2025-05-19 20:35:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:24.501250 | orchestrator | 2025-05-19 20:35:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:27.549162 | orchestrator | 2025-05-19 20:35:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:27.549318 | orchestrator | 2025-05-19 20:35:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:30.599784 | orchestrator | 2025-05-19 20:35:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:30.599900 | orchestrator | 2025-05-19 20:35:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:33.648304 | orchestrator | 2025-05-19 20:35:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:33.649321 | orchestrator | 2025-05-19 20:35:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:36.694481 | orchestrator | 2025-05-19 20:35:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:36.694589 | orchestrator | 2025-05-19 20:35:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:39.753964 | orchestrator | 2025-05-19 20:35:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:39.754178 | orchestrator | 2025-05-19 20:35:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:42.796526 | orchestrator | 2025-05-19 20:35:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:42.796653 | orchestrator | 2025-05-19 20:35:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:45.841311 | orchestrator | 2025-05-19 20:35:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:45.841419 | orchestrator | 2025-05-19 20:35:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:48.886978 | orchestrator | 2025-05-19 20:35:48 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:48.887081 | orchestrator | 2025-05-19 20:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:51.932630 | orchestrator | 2025-05-19 20:35:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:51.932752 | orchestrator | 2025-05-19 20:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:54.980411 | orchestrator | 2025-05-19 20:35:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:54.980519 | orchestrator | 2025-05-19 20:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:35:58.038514 | orchestrator | 2025-05-19 20:35:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:35:58.038650 | orchestrator | 2025-05-19 20:35:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:01.075320 | orchestrator | 2025-05-19 20:36:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:01.075436 | orchestrator | 2025-05-19 20:36:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:04.115207 | orchestrator | 2025-05-19 20:36:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:04.115324 | orchestrator | 2025-05-19 20:36:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:07.170450 | orchestrator | 2025-05-19 20:36:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:07.170564 | orchestrator | 2025-05-19 20:36:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:10.221627 | orchestrator | 2025-05-19 20:36:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:10.221739 | orchestrator | 2025-05-19 20:36:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:13.269393 | orchestrator | 2025-05-19 20:36:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:13.269497 | orchestrator | 2025-05-19 20:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:16.319390 | orchestrator | 2025-05-19 20:36:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:16.319527 | orchestrator | 2025-05-19 20:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:19.365710 | orchestrator | 2025-05-19 20:36:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:19.365809 | orchestrator | 2025-05-19 20:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:22.412243 | orchestrator | 2025-05-19 20:36:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:22.412388 | orchestrator | 2025-05-19 20:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:25.466921 | orchestrator | 2025-05-19 20:36:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:25.467002 | orchestrator | 2025-05-19 20:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:28.519590 | orchestrator | 2025-05-19 20:36:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:28.520767 | orchestrator | 2025-05-19 20:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:31.569409 | orchestrator | 2025-05-19 20:36:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 
20:36:31.569521 | orchestrator | 2025-05-19 20:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:34.632938 | orchestrator | 2025-05-19 20:36:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:34.634125 | orchestrator | 2025-05-19 20:36:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:37.680660 | orchestrator | 2025-05-19 20:36:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:37.680990 | orchestrator | 2025-05-19 20:36:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:40.740179 | orchestrator | 2025-05-19 20:36:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:40.740358 | orchestrator | 2025-05-19 20:36:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:43.796754 | orchestrator | 2025-05-19 20:36:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:43.796886 | orchestrator | 2025-05-19 20:36:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:46.839813 | orchestrator | 2025-05-19 20:36:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:46.839922 | orchestrator | 2025-05-19 20:36:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:49.894901 | orchestrator | 2025-05-19 20:36:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:49.895003 | orchestrator | 2025-05-19 20:36:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:52.936222 | orchestrator | 2025-05-19 20:36:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:52.936382 | orchestrator | 2025-05-19 20:36:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:55.979509 | orchestrator | 2025-05-19 20:36:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:55.979637 | orchestrator | 2025-05-19 20:36:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:36:59.036139 | orchestrator | 2025-05-19 20:36:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:36:59.036257 | orchestrator | 2025-05-19 20:36:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:02.088661 | orchestrator | 2025-05-19 20:37:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:02.088773 | orchestrator | 2025-05-19 20:37:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:05.154850 | orchestrator | 2025-05-19 20:37:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:05.154954 | orchestrator | 2025-05-19 20:37:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:08.203657 | orchestrator | 2025-05-19 20:37:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:08.203765 | orchestrator | 2025-05-19 20:37:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:11.250681 | orchestrator | 2025-05-19 20:37:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:11.250806 | orchestrator | 2025-05-19 20:37:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:14.301102 | orchestrator | 2025-05-19 20:37:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:14.301212 | orchestrator | 2025-05-19 20:37:14 | INFO  | Wait 1 second(s) 
until the next check 2025-05-19 20:37:17.348774 | orchestrator | 2025-05-19 20:37:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:17.348880 | orchestrator | 2025-05-19 20:37:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:20.392857 | orchestrator | 2025-05-19 20:37:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:20.392959 | orchestrator | 2025-05-19 20:37:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:23.443672 | orchestrator | 2025-05-19 20:37:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:23.443793 | orchestrator | 2025-05-19 20:37:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:26.491733 | orchestrator | 2025-05-19 20:37:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:26.491843 | orchestrator | 2025-05-19 20:37:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:29.538526 | orchestrator | 2025-05-19 20:37:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:29.538625 | orchestrator | 2025-05-19 20:37:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:32.587347 | orchestrator | 2025-05-19 20:37:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:32.587506 | orchestrator | 2025-05-19 20:37:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:35.630316 | orchestrator | 2025-05-19 20:37:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:35.630462 | orchestrator | 2025-05-19 20:37:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:38.673900 | orchestrator | 2025-05-19 20:37:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:38.674920 | orchestrator | 2025-05-19 20:37:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:41.724767 | orchestrator | 2025-05-19 20:37:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:41.724877 | orchestrator | 2025-05-19 20:37:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:44.771511 | orchestrator | 2025-05-19 20:37:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:44.771609 | orchestrator | 2025-05-19 20:37:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:47.815326 | orchestrator | 2025-05-19 20:37:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:47.815456 | orchestrator | 2025-05-19 20:37:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:50.862910 | orchestrator | 2025-05-19 20:37:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:50.863013 | orchestrator | 2025-05-19 20:37:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:53.914116 | orchestrator | 2025-05-19 20:37:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:53.914237 | orchestrator | 2025-05-19 20:37:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:37:56.957594 | orchestrator | 2025-05-19 20:37:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:37:56.957700 | orchestrator | 2025-05-19 20:37:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:00.012010 | orchestrator | 2025-05-19 
20:38:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:00.012139 | orchestrator | 2025-05-19 20:38:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:03.061755 | orchestrator | 2025-05-19 20:38:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:03.061895 | orchestrator | 2025-05-19 20:38:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:06.108986 | orchestrator | 2025-05-19 20:38:06 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:06.109087 | orchestrator | 2025-05-19 20:38:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:09.154003 | orchestrator | 2025-05-19 20:38:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:09.154179 | orchestrator | 2025-05-19 20:38:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:12.201080 | orchestrator | 2025-05-19 20:38:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:12.201180 | orchestrator | 2025-05-19 20:38:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:15.247854 | orchestrator | 2025-05-19 20:38:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:15.247964 | orchestrator | 2025-05-19 20:38:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:18.291312 | orchestrator | 2025-05-19 20:38:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:18.291478 | orchestrator | 2025-05-19 20:38:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:21.333467 | orchestrator | 2025-05-19 20:38:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:21.333570 | orchestrator | 2025-05-19 20:38:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:24.382687 | orchestrator | 2025-05-19 20:38:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:24.382791 | orchestrator | 2025-05-19 20:38:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:27.427858 | orchestrator | 2025-05-19 20:38:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:27.427944 | orchestrator | 2025-05-19 20:38:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:30.480850 | orchestrator | 2025-05-19 20:38:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:30.480955 | orchestrator | 2025-05-19 20:38:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:33.527796 | orchestrator | 2025-05-19 20:38:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:33.527875 | orchestrator | 2025-05-19 20:38:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:36.582314 | orchestrator | 2025-05-19 20:38:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:36.582419 | orchestrator | 2025-05-19 20:38:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:39.630804 | orchestrator | 2025-05-19 20:38:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:39.630877 | orchestrator | 2025-05-19 20:38:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:42.678307 | orchestrator | 2025-05-19 20:38:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 
2025-05-19 20:38:42.678404 | orchestrator | 2025-05-19 20:38:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:45.729675 | orchestrator | 2025-05-19 20:38:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:45.729805 | orchestrator | 2025-05-19 20:38:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:48.777187 | orchestrator | 2025-05-19 20:38:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:48.777310 | orchestrator | 2025-05-19 20:38:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:51.828421 | orchestrator | 2025-05-19 20:38:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:51.828582 | orchestrator | 2025-05-19 20:38:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:54.875265 | orchestrator | 2025-05-19 20:38:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:54.875372 | orchestrator | 2025-05-19 20:38:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:38:57.907841 | orchestrator | 2025-05-19 20:38:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:38:57.907960 | orchestrator | 2025-05-19 20:38:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:00.958814 | orchestrator | 2025-05-19 20:39:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:00.958914 | orchestrator | 2025-05-19 20:39:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:04.009147 | orchestrator | 2025-05-19 20:39:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:04.009256 | orchestrator | 2025-05-19 20:39:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:07.067612 | orchestrator | 2025-05-19 20:39:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:07.067758 | orchestrator | 2025-05-19 20:39:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:10.117287 | orchestrator | 2025-05-19 20:39:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:10.117405 | orchestrator | 2025-05-19 20:39:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:13.166396 | orchestrator | 2025-05-19 20:39:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:13.167332 | orchestrator | 2025-05-19 20:39:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:16.203443 | orchestrator | 2025-05-19 20:39:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:16.203551 | orchestrator | 2025-05-19 20:39:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:19.245586 | orchestrator | 2025-05-19 20:39:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:19.245730 | orchestrator | 2025-05-19 20:39:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:22.293857 | orchestrator | 2025-05-19 20:39:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:22.293960 | orchestrator | 2025-05-19 20:39:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:25.340068 | orchestrator | 2025-05-19 20:39:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:25.340176 | orchestrator | 2025-05-19 20:39:25 | INFO  | Wait 1 
second(s) until the next check 2025-05-19 20:39:28.390829 | orchestrator | 2025-05-19 20:39:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:28.390956 | orchestrator | 2025-05-19 20:39:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:31.445522 | orchestrator | 2025-05-19 20:39:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:31.445652 | orchestrator | 2025-05-19 20:39:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:34.496024 | orchestrator | 2025-05-19 20:39:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:34.496144 | orchestrator | 2025-05-19 20:39:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:37.544440 | orchestrator | 2025-05-19 20:39:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:37.544516 | orchestrator | 2025-05-19 20:39:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:40.595668 | orchestrator | 2025-05-19 20:39:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:40.595770 | orchestrator | 2025-05-19 20:39:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:43.644171 | orchestrator | 2025-05-19 20:39:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:43.644249 | orchestrator | 2025-05-19 20:39:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:46.692994 | orchestrator | 2025-05-19 20:39:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:46.693108 | orchestrator | 2025-05-19 20:39:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:49.736794 | orchestrator | 2025-05-19 20:39:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:49.736929 | orchestrator | 2025-05-19 20:39:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:52.788886 | orchestrator | 2025-05-19 20:39:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:52.789025 | orchestrator | 2025-05-19 20:39:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:55.838276 | orchestrator | 2025-05-19 20:39:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:55.838395 | orchestrator | 2025-05-19 20:39:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:39:58.888844 | orchestrator | 2025-05-19 20:39:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:39:58.888946 | orchestrator | 2025-05-19 20:39:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:01.936728 | orchestrator | 2025-05-19 20:40:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:01.936844 | orchestrator | 2025-05-19 20:40:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:04.984921 | orchestrator | 2025-05-19 20:40:04 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:04.985033 | orchestrator | 2025-05-19 20:40:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:08.039640 | orchestrator | 2025-05-19 20:40:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:08.039752 | orchestrator | 2025-05-19 20:40:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:11.085625 | orchestrator | 
2025-05-19 20:40:11 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:11.085821 | orchestrator | 2025-05-19 20:40:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:14.131979 | orchestrator | 2025-05-19 20:40:14 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:14.132074 | orchestrator | 2025-05-19 20:40:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:17.184499 | orchestrator | 2025-05-19 20:40:17 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:17.185408 | orchestrator | 2025-05-19 20:40:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:20.235183 | orchestrator | 2025-05-19 20:40:20 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:20.235255 | orchestrator | 2025-05-19 20:40:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:23.277960 | orchestrator | 2025-05-19 20:40:23 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:23.278085 | orchestrator | 2025-05-19 20:40:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:26.330682 | orchestrator | 2025-05-19 20:40:26 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:26.330757 | orchestrator | 2025-05-19 20:40:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:29.372153 | orchestrator | 2025-05-19 20:40:29 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:29.372273 | orchestrator | 2025-05-19 20:40:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:32.417263 | orchestrator | 2025-05-19 20:40:32 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:32.417386 | orchestrator | 2025-05-19 20:40:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:35.472376 | orchestrator | 2025-05-19 20:40:35 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:35.472456 | orchestrator | 2025-05-19 20:40:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:38.524928 | orchestrator | 2025-05-19 20:40:38 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:38.525065 | orchestrator | 2025-05-19 20:40:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:41.574899 | orchestrator | 2025-05-19 20:40:41 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:41.575023 | orchestrator | 2025-05-19 20:40:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:44.625387 | orchestrator | 2025-05-19 20:40:44 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:44.625485 | orchestrator | 2025-05-19 20:40:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:47.677313 | orchestrator | 2025-05-19 20:40:47 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:47.677465 | orchestrator | 2025-05-19 20:40:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:50.730468 | orchestrator | 2025-05-19 20:40:50 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:51.347387 | orchestrator | 2025-05-19 20:40:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:53.783914 | orchestrator | 2025-05-19 20:40:53 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in 
state STARTED 2025-05-19 20:40:53.785428 | orchestrator | 2025-05-19 20:40:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:56.834169 | orchestrator | 2025-05-19 20:40:56 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:56.837807 | orchestrator | 2025-05-19 20:40:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:40:59.886388 | orchestrator | 2025-05-19 20:40:59 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:40:59.886511 | orchestrator | 2025-05-19 20:40:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:02.934348 | orchestrator | 2025-05-19 20:41:02 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:02.935914 | orchestrator | 2025-05-19 20:41:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:05.977650 | orchestrator | 2025-05-19 20:41:05 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:05.977777 | orchestrator | 2025-05-19 20:41:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:09.026783 | orchestrator | 2025-05-19 20:41:09 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:09.026923 | orchestrator | 2025-05-19 20:41:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:12.077069 | orchestrator | 2025-05-19 20:41:12 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:12.077204 | orchestrator | 2025-05-19 20:41:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:15.127237 | orchestrator | 2025-05-19 20:41:15 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:15.127411 | orchestrator | 2025-05-19 20:41:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:18.155665 | orchestrator | 2025-05-19 20:41:18 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:18.155786 | orchestrator | 2025-05-19 20:41:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:21.205478 | orchestrator | 2025-05-19 20:41:21 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:21.205613 | orchestrator | 2025-05-19 20:41:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:24.256456 | orchestrator | 2025-05-19 20:41:24 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:24.256596 | orchestrator | 2025-05-19 20:41:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:27.320525 | orchestrator | 2025-05-19 20:41:27 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:27.322125 | orchestrator | 2025-05-19 20:41:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:30.362488 | orchestrator | 2025-05-19 20:41:30 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:30.362586 | orchestrator | 2025-05-19 20:41:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:33.424090 | orchestrator | 2025-05-19 20:41:33 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:33.425549 | orchestrator | 2025-05-19 20:41:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:36.481754 | orchestrator | 2025-05-19 20:41:36 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:36.481950 | orchestrator | 2025-05-19 20:41:36 | 
INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:39.536080 | orchestrator | 2025-05-19 20:41:39 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:39.539002 | orchestrator | 2025-05-19 20:41:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:42.584264 | orchestrator | 2025-05-19 20:41:42 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:42.584380 | orchestrator | 2025-05-19 20:41:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:45.642254 | orchestrator | 2025-05-19 20:41:45 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:45.646087 | orchestrator | 2025-05-19 20:41:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:48.698975 | orchestrator | 2025-05-19 20:41:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:48.701652 | orchestrator | 2025-05-19 20:41:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:51.746014 | orchestrator | 2025-05-19 20:41:51 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:51.747664 | orchestrator | 2025-05-19 20:41:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:54.798686 | orchestrator | 2025-05-19 20:41:54 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:54.798799 | orchestrator | 2025-05-19 20:41:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:41:57.852250 | orchestrator | 2025-05-19 20:41:57 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:41:57.852358 | orchestrator | 2025-05-19 20:41:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:00.909549 | orchestrator | 2025-05-19 20:42:00 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:00.909645 | orchestrator | 2025-05-19 20:42:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:03.963551 | orchestrator | 2025-05-19 20:42:03 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:03.963634 | orchestrator | 2025-05-19 20:42:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:07.013453 | orchestrator | 2025-05-19 20:42:07 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:07.013551 | orchestrator | 2025-05-19 20:42:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:10.066174 | orchestrator | 2025-05-19 20:42:10 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:10.066316 | orchestrator | 2025-05-19 20:42:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:13.108771 | orchestrator | 2025-05-19 20:42:13 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:13.108874 | orchestrator | 2025-05-19 20:42:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:16.150118 | orchestrator | 2025-05-19 20:42:16 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:16.150264 | orchestrator | 2025-05-19 20:42:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:19.187952 | orchestrator | 2025-05-19 20:42:19 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:19.188050 | orchestrator | 2025-05-19 20:42:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:22.237296 | 
orchestrator | 2025-05-19 20:42:22 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:22.237398 | orchestrator | 2025-05-19 20:42:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:25.288782 | orchestrator | 2025-05-19 20:42:25 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:25.288885 | orchestrator | 2025-05-19 20:42:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:28.340132 | orchestrator | 2025-05-19 20:42:28 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:28.340237 | orchestrator | 2025-05-19 20:42:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:31.387100 | orchestrator | 2025-05-19 20:42:31 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:31.387201 | orchestrator | 2025-05-19 20:42:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:34.427910 | orchestrator | 2025-05-19 20:42:34 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:34.428032 | orchestrator | 2025-05-19 20:42:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:37.477460 | orchestrator | 2025-05-19 20:42:37 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:37.477527 | orchestrator | 2025-05-19 20:42:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:40.530434 | orchestrator | 2025-05-19 20:42:40 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:40.530511 | orchestrator | 2025-05-19 20:42:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:43.580195 | orchestrator | 2025-05-19 20:42:43 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:43.580271 | orchestrator | 2025-05-19 20:42:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:46.634878 | orchestrator | 2025-05-19 20:42:46 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:46.634968 | orchestrator | 2025-05-19 20:42:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:49.680813 | orchestrator | 2025-05-19 20:42:49 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:49.680937 | orchestrator | 2025-05-19 20:42:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:52.734340 | orchestrator | 2025-05-19 20:42:52 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:52.734446 | orchestrator | 2025-05-19 20:42:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:55.788100 | orchestrator | 2025-05-19 20:42:55 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:55.788246 | orchestrator | 2025-05-19 20:42:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:42:58.836931 | orchestrator | 2025-05-19 20:42:58 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:42:58.837039 | orchestrator | 2025-05-19 20:42:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:43:01.899401 | orchestrator | 2025-05-19 20:43:01 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED 2025-05-19 20:43:01.899507 | orchestrator | 2025-05-19 20:43:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 20:43:04.954701 | orchestrator | 2025-05-19 20:43:04 | INFO  | Task 
6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:43:04.954856 | orchestrator | 2025-05-19 20:43:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 20:43:08.012474 | orchestrator | 2025-05-19 20:43:08 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:43:08.012596 | orchestrator | 2025-05-19 20:43:08 | INFO  | Wait 1 second(s) until the next check
2025-05-19 20:43:32.428358 | orchestrator | 2025-05-19 20:43:32 | INFO  | Task 027c27f0-78e6-4696-b2eb-a54c32eaf2f8 is in state STARTED
2025-05-19 20:43:41.620919 | orchestrator | 2025-05-19 20:43:41 | INFO  | Task 027c27f0-78e6-4696-b2eb-a54c32eaf2f8 is in state SUCCESS
[... task 6cbcb477-08de-4f2b-846d-588e50cbe210 remained in state STARTED and was re-checked roughly every 3 seconds until 20:47:48; the intervening, identical "is in state STARTED" / "Wait 1 second(s) until the next check" entries are omitted ...]
2025-05-19 20:47:48.521792 | orchestrator | 2025-05-19 20:47:48 | INFO  | Task 6cbcb477-08de-4f2b-846d-588e50cbe210 is in state STARTED
2025-05-19 20:47:48.521901 | orchestrator | 2025-05-19 20:47:48 | INFO  | Wait 1 second(s) until the next check
2025-05-19 20:47:51.215777 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
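The run ends with RESULT_TIMED_OUT because the deploy playbook was still waiting on task 6cbcb477-08de-4f2b-846d-588e50cbe210 when the job hit its time limit; the repeated messages above come from a simple poll-and-sleep loop. A minimal sketch of that kind of wait loop, assuming a hypothetical get_task_state() helper (this is not the actual OSISM client code):

    import time

    def wait_for_task(task_id, get_task_state, check_interval=1.0, timeout=14400):
        """Poll a task until it reaches a final state or the timeout expires.

        get_task_state(task_id) is a hypothetical callable returning a state
        string such as "STARTED" or "SUCCESS"; it stands in for however the
        real client queries its task backend.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                return state
            print(f"Wait {int(check_interval)} second(s) until the next check")
            time.sleep(check_interval)
        raise TimeoutError(f"Task {task_id} still not finished after {timeout} seconds")

If the task never reaches a final state, as in this build, the loop keeps printing until the surrounding job is cut off, which is exactly what the console shows here.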
2025-05-19 20:47:51.217614 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-19 20:47:51.988399 | PLAY [Post output play]
2025-05-19 20:47:52.005845 | LOOP [stage-output : Register sources]
2025-05-19 20:47:52.089183 | TASK [stage-output : Check sudo]
2025-05-19 20:47:52.981486 | orchestrator | sudo: a password is required
2025-05-19 20:47:53.132258 | orchestrator | ok: Runtime: 0:00:00.017604
2025-05-19 20:47:53.147357 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-19 20:47:53.190210 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-19 20:47:53.270075 | orchestrator | ok
2025-05-19 20:47:53.279562 | LOOP [stage-output : Ensure target folders exist]
2025-05-19 20:47:53.743688 | orchestrator | ok: "docs"
2025-05-19 20:47:54.006793 | orchestrator | ok: "artifacts"
2025-05-19 20:47:54.266368 | orchestrator | ok: "logs"
2025-05-19 20:47:54.288278 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-19 20:47:54.327518 | TASK [stage-output : Make all log files readable]
2025-05-19 20:47:54.632379 | orchestrator | ok
2025-05-19 20:47:54.642036 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-19 20:47:54.676674 | orchestrator | skipping: Conditional result was False
2025-05-19 20:47:54.689722 | TASK [stage-output : Discover log files for compression]
2025-05-19 20:47:54.713998 | orchestrator | skipping: Conditional result was False
2025-05-19 20:47:54.727030 | LOOP [stage-output : Archive everything from logs]
2025-05-19 20:47:54.770483 | PLAY [Post cleanup play]
2025-05-19 20:47:54.778507 | TASK [Set cloud fact (Zuul deployment)]
2025-05-19 20:47:54.834155 | orchestrator | ok
2025-05-19 20:47:54.845337 | TASK [Set cloud fact (local deployment)]
2025-05-19 20:47:54.879328 | orchestrator | skipping: Conditional result was False
2025-05-19 20:47:54.894154 | TASK [Clean the cloud environment]
2025-05-19 20:47:55.502218 | orchestrator | 2025-05-19 20:47:55 - clean up servers
2025-05-19 20:47:56.257688 | orchestrator | 2025-05-19 20:47:56 - testbed-manager
2025-05-19 20:47:56.344585 | orchestrator | 2025-05-19 20:47:56 - testbed-node-4
2025-05-19 20:47:56.438275 | orchestrator | 2025-05-19 20:47:56 - testbed-node-2
2025-05-19 20:47:56.521793 | orchestrator | 2025-05-19 20:47:56 - testbed-node-0
2025-05-19 20:47:56.610112 | orchestrator | 2025-05-19 20:47:56 - testbed-node-3
2025-05-19 20:47:56.699915 | orchestrator | 2025-05-19 20:47:56 - testbed-node-1
2025-05-19 20:47:56.786623 | orchestrator | 2025-05-19 20:47:56 - testbed-node-5
2025-05-19 20:47:56.868913 | orchestrator | 2025-05-19 20:47:56 - clean up keypairs
2025-05-19 20:47:56.889281 | orchestrator | 2025-05-19 20:47:56 - testbed
2025-05-19 20:47:56.915588 | orchestrator | 2025-05-19 20:47:56 - wait for servers to be gone
2025-05-19 20:48:07.723912 | orchestrator | 2025-05-19 20:48:07 - clean up ports
2025-05-19 20:48:07.919528 | orchestrator | 2025-05-19 20:48:07 - 059301bf-5ab7-4a03-acef-0aef8cd20057
2025-05-19 20:48:08.203553 | orchestrator | 2025-05-19 20:48:08 - 1b25cc5b-40fb-44ee-8f6a-e477580e3cb5
2025-05-19 20:48:08.691266 | orchestrator | 2025-05-19 20:48:08 - 4ce3a513-5ebc-46b9-aab1-e7645316e7ac
2025-05-19 20:48:08.894484 | orchestrator | 2025-05-19 20:48:08 - 66fd0c61-ec1a-4df2-9e40-3166759c40f5
2025-05-19 20:48:09.120555 | orchestrator | 2025-05-19 20:48:09 - 6b11645d-6d43-459a-8103-4ebc61e053cb
2025-05-19 20:48:09.450381 | orchestrator | 2025-05-19 20:48:09 - 94b10b4a-c5f6-4f5e-8d0e-62b6ad505056
2025-05-19 20:48:10.080849 | orchestrator | 2025-05-19 20:48:10 - b914f48d-ceba-4087-80db-371a156df794
2025-05-19 20:48:10.306651 | orchestrator | 2025-05-19 20:48:10 - clean up volumes
2025-05-19 20:48:10.423970 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-1-node-base
2025-05-19 20:48:10.462990 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-4-node-base
2025-05-19 20:48:10.503360 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-manager-base
2025-05-19 20:48:10.546608 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-0-node-base
2025-05-19 20:48:10.593455 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-5-node-base
2025-05-19 20:48:10.634896 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-3-node-base
2025-05-19 20:48:10.679438 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-2-node-base
2025-05-19 20:48:10.721107 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-7-node-4
2025-05-19 20:48:10.766262 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-0-node-3
2025-05-19 20:48:10.808692 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-8-node-5
2025-05-19 20:48:10.852807 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-6-node-3
2025-05-19 20:48:10.895065 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-5-node-5
2025-05-19 20:48:10.942718 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-1-node-4
2025-05-19 20:48:10.987808 | orchestrator | 2025-05-19 20:48:10 - testbed-volume-4-node-4
2025-05-19 20:48:11.027684 | orchestrator | 2025-05-19 20:48:11 - testbed-volume-2-node-5
2025-05-19 20:48:11.069344 | orchestrator | 2025-05-19 20:48:11 - testbed-volume-3-node-3
2025-05-19 20:48:11.116403 | orchestrator | 2025-05-19 20:48:11 - disconnect routers
2025-05-19 20:48:11.709967 | orchestrator | 2025-05-19 20:48:11 - testbed
2025-05-19 20:48:12.632957 | orchestrator | 2025-05-19 20:48:12 - clean up subnets
2025-05-19 20:48:12.705878 | orchestrator | 2025-05-19 20:48:12 - subnet-testbed-management
2025-05-19 20:48:12.855170 | orchestrator | 2025-05-19 20:48:12 - clean up networks
2025-05-19 20:48:12.981553 | orchestrator | 2025-05-19 20:48:12 - net-testbed-management
2025-05-19 20:48:13.338790 | orchestrator | 2025-05-19 20:48:13 - clean up security groups
2025-05-19 20:48:13.375571 | orchestrator | 2025-05-19 20:48:13 - testbed-management
2025-05-19 20:48:13.486113 | orchestrator | 2025-05-19 20:48:13 - testbed-node
2025-05-19 20:48:13.590958 | orchestrator | 2025-05-19 20:48:13 - clean up floating ips
2025-05-19 20:48:13.622482 | orchestrator | 2025-05-19 20:48:13 - 81.163.193.40
2025-05-19 20:48:13.958274 | orchestrator | 2025-05-19 20:48:13 - clean up routers
2025-05-19 20:48:14.066277 | orchestrator | 2025-05-19 20:48:14 - testbed
2025-05-19 20:48:14.951218 | orchestrator | ok: Runtime: 0:00:19.671437
2025-05-19 20:48:14.955142 | PLAY RECAP
2025-05-19 20:48:14.955246 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-19 20:48:15.119775 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
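For context, the "Clean the cloud environment" task above tears the testbed project down in dependency order: servers and keypairs first, then ports and volumes, router interfaces, subnets, networks, security groups, floating IPs, and finally the router itself. A rough openstacksdk sketch of that order follows; the cloud name, the "testbed" name prefix, and the simplified filtering are assumptions, and the job's actual cleanup script may differ:

    import openstack

    # Assumptions: the clouds.yaml entry is called "testbed" and every resource
    # the job created carries the "testbed" prefix seen in the log above.
    conn = openstack.connect(cloud="testbed")
    prefix = "testbed"

    # clean up servers and keypairs, then wait for the servers to be gone
    servers = [s for s in conn.compute.servers() if s.name.startswith(prefix)]
    for server in servers:
        conn.compute.delete_server(server)
    for keypair in conn.compute.keypairs():
        if keypair.name.startswith(prefix):
            conn.compute.delete_keypair(keypair)
    for server in servers:
        conn.compute.wait_for_delete(server)

    # clean up ports on the management network and the leftover volumes
    network = conn.network.find_network(f"net-{prefix}-management")
    if network:
        for port in conn.network.ports(network_id=network.id):
            conn.network.delete_port(port)
    for volume in conn.block_storage.volumes():
        if volume.name.startswith(f"{prefix}-volume"):
            conn.block_storage.delete_volume(volume)

    # disconnect the router from its subnet, then remove subnet and network
    router = conn.network.find_router(prefix)
    subnet = conn.network.find_subnet(f"subnet-{prefix}-management")
    if router and subnet:
        conn.network.remove_interface_from_router(router, subnet_id=subnet.id)
    if subnet:
        conn.network.delete_subnet(subnet)
    if network:
        conn.network.delete_network(network)

    # clean up security groups, floating ips and finally the router itself
    for group in conn.network.security_groups():
        if group.name.startswith(prefix):
            conn.network.delete_security_group(group)
    for ip in conn.network.ips():
        if not ip.port_id:  # only release floating IPs that are no longer attached
            conn.network.delete_ip(ip)
    if router:
        conn.network.delete_router(router)

The ordering matters: servers must be gone before their ports and volumes can be removed, and the router interface has to be detached before the subnet, network, and router can be deleted, which is why the log walks through the resources in exactly this sequence.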
2025-05-19 20:48:15.122574 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-19 20:48:15.897438 | PLAY [Cleanup play]
2025-05-19 20:48:15.914426 | TASK [Set cloud fact (Zuul deployment)]
2025-05-19 20:48:15.972911 | orchestrator | ok
2025-05-19 20:48:15.983194 | TASK [Set cloud fact (local deployment)]
2025-05-19 20:48:16.019624 | orchestrator | skipping: Conditional result was False
2025-05-19 20:48:16.035479 | TASK [Clean the cloud environment]
2025-05-19 20:48:17.113383 | orchestrator | 2025-05-19 20:48:17 - clean up servers
2025-05-19 20:48:17.571711 | orchestrator | 2025-05-19 20:48:17 - clean up keypairs
2025-05-19 20:48:17.590774 | orchestrator | 2025-05-19 20:48:17 - wait for servers to be gone
2025-05-19 20:48:17.632628 | orchestrator | 2025-05-19 20:48:17 - clean up ports
2025-05-19 20:48:17.709183 | orchestrator | 2025-05-19 20:48:17 - clean up volumes
2025-05-19 20:48:17.768673 | orchestrator | 2025-05-19 20:48:17 - disconnect routers
2025-05-19 20:48:17.796005 | orchestrator | 2025-05-19 20:48:17 - clean up subnets
2025-05-19 20:48:17.825967 | orchestrator | 2025-05-19 20:48:17 - clean up networks
2025-05-19 20:48:17.976253 | orchestrator | 2025-05-19 20:48:17 - clean up security groups
2025-05-19 20:48:18.011574 | orchestrator | 2025-05-19 20:48:18 - clean up floating ips
2025-05-19 20:48:18.037068 | orchestrator | 2025-05-19 20:48:18 - clean up routers
2025-05-19 20:48:18.572479 | orchestrator | ok: Runtime: 0:00:01.258969
2025-05-19 20:48:18.576592 | PLAY RECAP
2025-05-19 20:48:18.576715 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-19 20:48:18.709267 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-19 20:48:18.710301 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-19 20:48:19.468045 | PLAY [Base post-fetch]
2025-05-19 20:48:19.483708 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-19 20:48:19.539248 | orchestrator | skipping: Conditional result was False
2025-05-19 20:48:19.553770 | TASK [fetch-output : Set log path for single node]
2025-05-19 20:48:19.589110 | orchestrator | ok
2025-05-19 20:48:19.597324 | LOOP [fetch-output : Ensure local output dirs]
2025-05-19 20:48:20.075978 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/logs"
2025-05-19 20:48:20.355674 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/artifacts"
2025-05-19 20:48:20.646758 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work/docs"
2025-05-19 20:48:20.672606 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-19 20:48:21.646814 | orchestrator | changed: .d..t...... ./
2025-05-19 20:48:21.647438 | orchestrator | changed: All items complete
2025-05-19 20:48:22.381892 | orchestrator | changed: .d..t...... ./
2025-05-19 20:48:23.167182 | orchestrator | changed: .d..t...... ./
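The "changed: .d..t...... ./" entries are rsync itemized-changes output from the collect loop: each staged directory (logs, artifacts, docs) is pulled from the node into the executor's work directory, and the ".d..t......" code simply means a directory whose timestamp differed. A rough equivalent of what the loop does, with the remote user, host, and source path as placeholder assumptions rather than the role's real templated values:

    import subprocess

    # Placeholder values: the real role derives the remote spec and the build
    # work directory from Zuul variables.
    build_dir = "/var/lib/zuul/builds/38c228cfd6f947b9850ab9dad5977ef2/work"
    for subdir in ("logs", "artifacts", "docs"):
        subprocess.run(
            [
                "rsync", "-a", "-i",  # -a preserves attributes, -i itemizes changes
                f"zuul-worker@orchestrator:zuul-output/{subdir}/",
                f"{build_dir}/{subdir}/",
            ],
            check=True,
        )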
2025-05-19 20:48:23.198813 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-19 20:48:23.236797 | orchestrator | skipping: Conditional result was False
2025-05-19 20:48:23.239261 | orchestrator | skipping: Conditional result was False
2025-05-19 20:48:23.257484 | PLAY RECAP
2025-05-19 20:48:23.257569 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-19 20:48:23.394215 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-19 20:48:23.396574 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 20:48:24.141670 | PLAY [Base post]
2025-05-19 20:48:24.156662 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-19 20:48:25.124415 | orchestrator | changed
2025-05-19 20:48:25.132148 | PLAY RECAP
2025-05-19 20:48:25.132216 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-19 20:48:25.261334 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 20:48:25.262377 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-19 20:48:26.160163 | PLAY [Base post-logs]
2025-05-19 20:48:26.172538 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-19 20:48:26.696023 | localhost | changed
2025-05-19 20:48:26.706732 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-19 20:48:26.732780 | localhost | ok
2025-05-19 20:48:26.736606 | TASK [Set zuul-log-path fact]
2025-05-19 20:48:26.763231 | localhost | ok
2025-05-19 20:48:26.771489 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 20:48:26.807376 | localhost | ok
2025-05-19 20:48:26.811507 | TASK [upload-logs : Create log directories]
2025-05-19 20:48:27.393445 | localhost | changed
2025-05-19 20:48:27.399342 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-19 20:48:27.936033 | localhost -> localhost | ok: Runtime: 0:00:00.007596
2025-05-19 20:48:27.945764 | TASK [upload-logs : Upload logs to log server]
2025-05-19 20:48:28.527323 | localhost | Output suppressed because no_log was given
2025-05-19 20:48:28.533062 | LOOP [upload-logs : Compress console log and json output]
2025-05-19 20:48:28.592621 | localhost | skipping: Conditional result was False
2025-05-19 20:48:28.599557 | localhost | skipping: Conditional result was False
2025-05-19 20:48:28.606867 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-19 20:48:28.654659 | localhost | skipping: Conditional result was False
2025-05-19 20:48:28.658805 | localhost | skipping: Conditional result was False
2025-05-19 20:48:28.668543 | LOOP [upload-logs : Upload console log and json output]